
Predicting involuntary admission following inpatient psychiatric treatment using machine learning trained on electronic health record data

Published online by Cambridge University Press:  18 November 2024

Erik Perfalk*
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Jakob Grøhn Damgaard
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Martin Bernstorff
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Lasse Hansen
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Andreas Aalkjær Danielsen
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Søren Dinesen Østergaard
Affiliation:
Department of Affective Disorders, Aarhus University Hospital – Psychiatry, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
*
Corresponding author: Erik Perfalk; Email: erperf@rm.dk

Abstract

Background

Involuntary admissions to psychiatric hospitals are on the rise. If patients at elevated risk of involuntary admission could be identified, prevention may be possible. Our aim was to develop and validate a prediction model for involuntary admission of patients receiving care within a psychiatric service system using machine learning trained on routine clinical data from electronic health records (EHRs).

Methods

EHR data from all adult patients who had been in contact with the Psychiatric Services of the Central Denmark Region between 2013 and 2021 were retrieved. We derived 694 patient predictors (covering e.g. diagnoses, medication, and coercive measures) and 1134 predictors from free text using term frequency-inverse document frequency and sentence transformers. At every voluntary inpatient discharge (prediction time) without an involuntary admission in the 2 years prior, we predicted involuntary admission within the following 180 days. XGBoost and elastic net models were trained on 85% of the dataset. The models with the highest area under the receiver operating characteristic curve (AUROC) were tested on the remaining 15% of the data.
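A minimal sketch of the pipeline described above (free-text features via term frequency-inverse document frequency, an elastic-net-penalized classifier, an 85/15 train/test split, and AUROC-based evaluation) could look as follows. All data, vocabulary, and hyperparameters here are synthetic and purely illustrative; they are assumptions for demonstration, not the study's actual code or predictors.

```python
# Illustrative sketch only: synthetic "notes" stand in for EHR free text,
# and a logistic model with an elastic net penalty stands in for the
# study's elastic net classifier. Nothing here reproduces the paper's setup.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic clinical notes: the positive class mentions coercion-related terms.
notes = (["calm cooperative discharge planned"] * 80
         + ["agitated coercion risk restraint"] * 20)
labels = np.array([0] * 80 + [1] * 20)

# 85/15 split, mirroring the train/test proportions reported in Methods.
X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.15, stratify=labels, random_state=42
)

# TF-IDF text features feeding an elastic-net-penalized logistic regression
# ('saga' is the sklearn solver that supports the elasticnet penalty).
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, max_iter=5000),
)
model.fit(X_train, y_train)

# Evaluate by AUROC on the held-out split.
auroc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUROC: {auroc:.2f}")
```

Because the synthetic classes are perfectly separable by vocabulary, this toy example yields a near-perfect AUROC; real EHR text would not.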

Results

The model was trained on 50 634 voluntary inpatient discharges among 17 968 patients. The cohort comprised 1672 voluntary inpatient discharges followed by an involuntary admission. The best XGBoost and elastic net models from the training phase obtained AUROCs of 0.84 and 0.83, respectively, in the test phase.

Conclusion

A machine learning model using routine clinical EHR data can accurately predict involuntary admission. If implemented as a clinical decision support tool, this model may guide interventions aimed at reducing the risk of involuntary admission.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Figure 1. Extraction of data and outcome, dataset splitting, prediction time filtering, specification of predictors and flattening, model training, testing, and evaluation. This figure was adapted for this project from Bernstorff et al. (2024). IA, involuntary admissions; F1 and F2, ICD-10 diagnoses within the group of diagnoses included in the F1 and F2 chapters; CV, cross-validation; TP, true positive; FP, false positive; TN, true negative; FN, false negative.


Table 1. Descriptive statistics for prediction times


Table 2. Model performance after cross-validation hyperparameter tuning for XGBoost and elastic net models trained on different subsets of the predictors (2A) and different lookaheads (2B)


Figure 2. Model performance of the XGBoost model in the test set. (a) Receiver operating characteristics curve. AUROC, area under the receiver operating characteristics curve. (b) Confusion matrix. PPR, positive predictive rate; NPV, negative predictive value; IA, involuntary admission. The decision threshold is defined based on a PPR of 5%. (c) Sensitivity (at the same specificity) by months from prediction time to event, stratified by desired PPR.


Figure 3. Model performance of the elastic net model in the test set. (a) Receiver operating characteristics curve. AUROC, area under the receiver operating characteristics curve. (b) Confusion matrix. PPR, positive predictive rate; NPV, negative predictive value; IA, involuntary admission. The decision threshold is defined based on a PPR of 5%. (c) Sensitivity (at the same specificity) by months from prediction time to event, stratified by desired PPR.
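The captions above define the decision threshold from a fixed positive predictive rate (PPR) of 5%, i.e. the model flags a fixed fraction of prediction times as high-risk. One plausible way to derive such a threshold is as a quantile of the predicted risk scores; the scores below are synthetic and this quantile approach is an assumption for illustration, not the paper's published procedure.

```python
# Illustrative sketch: pick a decision threshold so that a fixed fraction
# (the PPR, here 5%) of prediction times is flagged as high-risk.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random(1000)  # synthetic predicted risks for 1000 prediction times
ppr = 0.05                 # flag the top 5% highest-risk predictions

# The (1 - PPR) quantile of the score distribution is the cut-off above
# which exactly ~5% of predictions fall.
threshold = np.quantile(scores, 1 - ppr)
flagged = scores >= threshold
print(f"threshold={threshold:.3f}, flagged={flagged.sum()} of {scores.size}")
```

Fixing the PPR (rather than a probability cut-off) keeps the alert burden on clinicians constant regardless of how the score distribution shifts.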


Table 3. Performance metrics on test set for model trained on full predictor set at varying positive rates

Supplementary material

Perfalk et al. supplementary material (File, 12 MB)