
Artificial intelligence and suicide prevention: A systematic review

Published online by Cambridge University Press:  15 February 2022

Alban Lejeune*
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France
Aziliz Le Glaz
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France
Pierre-Antoine Perron
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France
Johan Sebti
Affiliation:
Mental Health Department, French Polynesia Hospital, FFC3+H9G, Pirae, French Polynesia
Enrique Baca-Garcia
Affiliation:
Departamento de Psiquiatria, IIS-Fundación Jiménez Díaz, Madrid, Spain
Michel Walter
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France EA 7479 SPURBO, Université de Bretagne Occidentale, Brest, France
Christophe Lemey
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France EA 7479 SPURBO, Université de Bretagne Occidentale, Brest, France SPURBO, IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238, Brest, France
Sofian Berrouiguet
Affiliation:
URCI Mental Health Department, Brest Medical University Hospital, Brest, France LaTIM, INSERM, UMR 1101, Brest, France
*Author for correspondence: Alban Lejeune, E-mail: alban.lejeune@gmail.com

Abstract

Background

Suicide is one of the main preventable causes of death. Artificial intelligence (AI) could improve methods for assessing suicide risk. The objective of this review is to assess the potential of AI in identifying patients who are at risk of attempting suicide.

Methods

A systematic review of the literature was conducted on PubMed, EMBASE, and SCOPUS databases, using relevant keywords.

Results

The search identified 296 studies. Seventeen studies, published between 2014 and 2020 and meeting the inclusion criteria, were selected as relevant. Included studies aimed to predict individual suicide risk or to identify at-risk individuals in a specific population. Overall AI performance was good, although it varied across algorithms and application settings.

Conclusions

AI appears to have a high potential for identifying patients at risk of suicide. The precise use of these algorithms in clinical situations, as well as the ethical issues they raise, remains to be clarified.

Type
Review/Meta-analysis
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the European Psychiatric Association

Introduction

Overall mortality from suicide is currently about 700,000 deaths per year [1]. Suicide and suicidal behavior are a public health concern. Among people surviving a suicide attempt, about one-third come to the emergency department for help [Reference Berrouiguet, Courtet, Larsen, Walter and Vaiva2]. Suicide risk assessment of these patients is a daily challenge in psychiatric practice. Personal and family history, particularly of suicide attempts, plays a major role in assessing suicide risk: a personal history of suicide attempt is the most significant risk factor for death by suicide [Reference Berrouiguet, Courtet, Larsen, Walter and Vaiva2]. Patients with severe mental illness, in particular mood disorders, borderline personality disorder, and anorexia nervosa, are more likely to attempt suicide [Reference Chesney, Goodwin and Fazel3,Reference Nordentoft, Mortensen and Pedersen4], and they are at higher risk of recurrence in the years following a first attempt [Reference Runeson, Haglund, Lichtenstein and Tidemalm5]. Among patients who survived a suicide attempt, one study found that certain subgroups (alcohol use, personality disorder, and young age) were more likely to attempt suicide again, while others (elderly patients) were more likely to die by suicide after a first attempt [Reference Parra-Uribe, Blasco-Fontecilla, Garcia-Parés, Martínez-Naval, Valero-Coppin and Cebrià-Meca6]. In clinical practice, particular attention must also be given to the period of discharge from care services: the risk appears to be increased within the 2 weeks following discharge from the hospital [Reference Berrouiguet, Courtet, Larsen, Walter and Vaiva2].

Since the incidence of suicide and suicide attempts remains high [1,7,Reference Torous, Larsen, Depp, Cosco, Barnett and Nock8], new approaches are needed to identify and manage patients at high risk of suicide. Current suicide risk assessment methods are based on clinical questioning and are therefore inherently subjective; their accuracy and predictive value are limited [Reference Fonseka, Bhat and Kennedy9]. Several scales can be used in suicide risk assessment, but their accuracy appears insufficient [Reference Lindh, Dahlin, Beckman, Strömsten, Jokinen and Wiktorsson10]. In their meta-analysis, Franklin et al. [Reference Franklin, Ribeiro, Fox, Bentley, Kleiman and Huang11] found that the ability to predict suicide had not improved over the past 50 years: the prediction of suicide attempts still lacks accuracy, and advances in suicide risk assessment are needed [Reference Franklin, Ribeiro, Fox, Bentley, Kleiman and Huang11].

Artificial intelligence (AI) and machine learning (ML) have emerged as ways to improve risk detection [Reference Bernert, Hilberg, Melia, Kim, Shah and Abnousi12]. These techniques require a large database (big data) from which to extract a patient’s profile or significant risk factors [Reference Berrouiguet, Billot, Larsen, Lopez-Castroman, Jaussent and Walter13]. AI platforms can identify patterns in a dataset to generate risk algorithms and determine the effect of risk and protective factors on suicide [Reference Fonseka, Bhat and Kennedy9]. AI has already been successfully applied in other medical disciplines (imaging, pathology, dermatology, etc.), where it is already faster than medical experts, with equivalent accuracy, for the diagnosis of certain pathologies. Although diagnostic accuracy never reaches 100%, this technology combined with the skills of the clinician could greatly improve overall performance [Reference Miller and Brown14]. In psychiatry, AI could be used for diagnostic purposes or to support daily patient assessment and drug prescription. Beyond its medical value, AI could also show a clear economic benefit [Reference Bernert, Hilberg, Melia, Kim, Shah and Abnousi12].

In their systematic literature review, Burke et al. identified three main goals of ML studies in suicide. The first was to improve the accuracy of risk prediction, the second was to identify important predictors and the interactions between them, and the third was to model subgroups of patients [Reference Burke, Ammerman and Jacobucci15]. The studies that focused on improving suicide risk prediction capabilities suggested a high predictive potential for this technology [Reference Torous, Larsen, Depp, Cosco, Barnett and Nock8,Reference Bernert, Hilberg, Melia, Kim, Shah and Abnousi12,Reference Desjardins, Cats-Baril, Maruti, Freeman and Althoff16,Reference Barak-Corren, Castro, Javitt, Hoffnagle, Dai and Perlis17]. At an individual level, AI could allow for better identification of individuals in crisis and appropriate intervention. At the population level, the algorithm could find groups at risk [Reference Fonseka, Bhat and Kennedy9] and individuals at risk of suicide attempt in these groups [Reference Bernecker, Zuromski, Gutierrez, Joiner, King and Liu18]. Decision support tools could also allow for a more accurate assessment of suicidal risk in situations where the patient denies suicidal ideation [Reference Zaher and Buckingham19].

In clinical practice, this technology could help the clinician to more effectively identify patients at risk of suicide, with the goal of improving predictive abilities for suicide. Further studies are required to validate this tool and apply it to clinical practice [Reference Torous, Larsen, Depp, Cosco, Barnett and Nock8,Reference Graham, Depp, Lee, Nebeker, Tu and Kim20].

In their narrative review published in 2020, D’Hotman and Loh [Reference D’Hotman and Loh21] concluded that AI has high potential in suicide risk prediction, albeit with ethical reservations regarding the use of individual data. To clarify the potential of this technology, we conducted a systematic review of the literature including clinical studies using AI to assess suicide risk. To our knowledge, this is the first systematic review on this subject. The objective of this review is to evaluate the potential of AI in predicting individual suicide risk and identifying individuals at risk of suicide attempt in a population.

Material and Methods

We used the PRISMA criteria (Preferred Reporting Items for Systematic reviews and Meta-Analyses) to identify, select, and critically assess relevant studies while minimizing bias.

Search strategy

We searched the bibliographic databases PubMed, SCOPUS, and EMBASE up to April 2020. The keyword list was based on two fields: suicide and AI. A search strategy was built using the Boolean operators “AND” and “OR” and applied to titles and abstracts: (suicid*[Title]) AND (artificial intelligence[Title/Abstract] OR AI[Title/Abstract] OR neural network[Title/Abstract] OR deep learning[Title/Abstract] OR machine learning[Title/Abstract]). No language restriction was applied at the search stage, provided the study was referenced in the selected databases; to limit selection bias, we also did not apply any restriction on article type or population. Studies not written in English were subsequently excluded.
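As an illustration, the PubMed arm of this search can be reproduced programmatically. The sketch below assumes the third-party Biopython package and uses a placeholder e-mail address; it submits the same Boolean query through the NCBI Entrez API, whereas the review itself was run through each database's own interface.

```python
# Minimal sketch: running the PubMed arm of the search strategy via the NCBI
# Entrez API. Assumes Biopython (pip install biopython); the e-mail address is
# a placeholder required by NCBI's usage policy, not one used in the review.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder required by NCBI

query = (
    "suicid*[Title] AND "
    '("artificial intelligence"[Title/Abstract] OR AI[Title/Abstract] OR '
    '"neural network"[Title/Abstract] OR "deep learning"[Title/Abstract] OR '
    '"machine learning"[Title/Abstract])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")  # total matches in PubMed
print(record["IdList"][:10])             # first ten PMIDs, e.g. for screening
```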

Study selection

We included clinical trials and observational studies. The primary objective was to collect studies using AI to predict individual suicide risk or to identify individuals at risk of suicide in a population. Studies were selected by two independent authors, Alban Lejeune and Sofian Berrouiguet. We excluded literature reviews and studies of the theoretical applications of AI without final results. We also excluded studies of computer programs or smartphone applications that did not use AI to assess suicide risk.

Data collection process

Data were extracted from each article independently using a standard form. The following information was collected: the main author’s name and country of origin, year of publication, population, technology used, inclusion/exclusion criteria, main objective, method, main endpoint, results, and authors’ conclusion.

Results

Flow chart

Figure 1 shows the PRISMA flow chart, summarizing the steps of the review. The initial search identified 296 studies. Based on the titles and abstracts, we excluded 249 studies. We downloaded the 47 remaining studies for full-text review, following which we excluded an additional 30 studies. We analyzed the 17 remaining studies that matched the inclusion criteria.

Figure 1. PRISMA flowchart outlining the study selection process.

Authors, year of publication, and country of origin

Included studies were mainly conducted in the USA (8/17, 47%), Korea (4/17, 24%), and Canada (3/17, 18%; Figure 2). Among the 17 included studies, three were written by the team of Sanderson et al. Included studies were published between 2014 and April 2020. The majority of studies were published in 2019 (10/17, 59%), three studies were published in 2018 (3/17, 18%), two studies in 2020 (2/17, 12%), one study in 2017, and one study in 2014 (Figure 3).

Figure 2. Included studies by country of origin.

Figure 3. Number of studies included by year of publication.

Study designs, populations, and sample sizes

Regarding design, 13 studies were retrospective and four were prospective. Sample sizes varied between 182 and 19,061,056: four studies used a sample smaller than 1,000, six a sample between 1,000 and 10,000, and seven a sample greater than 10,000. Studies were conducted on the general population (7/17, 41%), adult patients (2/17, 12%), teenagers or young adults (4/17, 24%), an ethnic group or a particular subgroup (2/17, 12%), and military personnel (2/17, 12%). In the PRISMA quality assessment, the included studies obtained heterogeneous scores, ranging from 29 to 46 (see Supplementary File S2 and Figure 4).

Figure 4. PRISMA quality assessment of the included studies.

Technologies used

Included studies used one or several AI technologies. The main algorithms were logistic regression (LR; 9/17, 53%), random forest (RF; 6/17, 35%), gradient-boosting algorithms (3/17, 18%), LASSO (2/17, 12%), and support vector machines (SVM; 2/17, 12%). Six studies used at least one type of neural network (NN; 6/17, 35%). Most studies used cross-validation (15/17, 88%; Figure 5). The most common ML paradigm was supervised learning. None of the included studies used data augmentation. An illustrative sketch of this kind of algorithm comparison follows Figure 5.

Figure 5. Main AI types used.

Abbreviations: AI, artificial intelligence; CR, Cox regression; DT, decision tree; LR, logistic regression; NN, neural network; RF, random forest; SVM, support vector machine; XGB/GBT, extreme gradient boosting/gradient boosted tree.
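To make the evaluation pattern shared by these studies concrete, the sketch below compares the most common algorithm families by stratified five-fold cross-validated AUC using scikit-learn. The data are synthetic, with a deliberately rare positive class; this is a minimal sketch of the methodology, not a reproduction of any included study's pipeline.

```python
# Minimal sketch of the evaluation pattern common to the included studies:
# several algorithm families compared by cross-validated AUC. Synthetic,
# class-imbalanced data stand in for the health-system cohorts reviewed here.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rare positive class, mimicking the low base rate of suicidal behavior.
X, y = make_classification(n_samples=5000, n_features=30, weights=[0.97],
                           random_state=0)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "GBT": GradientBoostingClassifier(random_state=0),
    "NN": make_pipeline(StandardScaler(),
                        MLPClassifier(max_iter=1000, random_state=0)),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {aucs.mean():.3f} +/- {aucs.std():.3f}")
```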

Objective of the studies and performance of algorithms

Main results

The two fields of interest were the prediction of individual suicide risk and the identification of people at risk in a given population, using one or more AI technologies. Among the 17 included studies, 13 (76%) primarily aimed to predict individual suicide risk, and four (24%) primarily aimed to identify at-risk individuals in a population.

In the prediction of individual suicide risk, the included studies reported area under the curve (AUC) values between 0.604 and 0.947. NNs and gradient-boosted algorithms appeared to perform best in the studies that used them (see Table 1 and Figure 6). The performance of the different algorithms was mainly reported as AUC (Supplementary File S1); four studies reported other parameters (sensitivity, specificity, accuracy, and positive and negative predictive values). Among the included studies, three were conducted in Canada by Sanderson et al. [Reference Sanderson, Bulloch, Wang, Williamson and Patten22–Reference Sanderson, Bulloch, Wang, Williamson and Patten24]. These studies compared the relative performance of different algorithms, including NNs, in predicting suicide risk, and found that NNs and gradient-boosted algorithms (XGBs) brought a significant improvement over LR models. In their most recent article [Reference Sanderson, Bulloch, Wang, Williams, Williamson and Patten23], an XGB algorithm was compared with LR models for predicting suicide risk during the 90 days following discharge from an emergency department; the XGB model provided superior discrimination and calibration, with an accuracy that could allow clinical application (AUC 0.88). A Korean team [Reference Choi, Lee, Yoon, Won and Kim25], however, obtained lower performance with an NN than with Cox regression (CR) or SVM algorithms.

Table 1. Performance in the prediction of suicide risk with the main algorithms, expressed as AUC, in studies in which this value was reported.

Abbreviations: AUC, area under the curve; BN, Bayesian network; CR, Cox regression; LR, logistic regression; NN, neural network; RF, random forest; SVM, support vector machine; XGB/GBT, extreme gradient boosting/gradient boosted tree.

Figure 6. Performance in AUC of the different algorithms, based on the studies included in Table 1.

Abbreviations: AUC, area under the curve; BN, Bayesian network; DT, decision tree; LR, logistic regression; NN, neural network; RF, random forest; XGB/GBT, extreme gradient boosting/gradient boosted tree.

For the identification of at-risk individuals in a specific population, results were reported as sensitivity, specificity, and precision; the AUC was reported in one article [Reference Haroz, Walsh, Goklish, Cwik, O’Keefe and Barlow26].

Most studies (15/17) used cross-validation to prevent overfitting. One team [Reference Walsh, Ribeiro and Franklin27] used bootstrapping with optimism adjustment instead of cross-validation: the predictive models were trained and tested on the full study data, and the optimism of this apparent performance was estimated by repeating the same modeling steps on bootstrapped replicates and subtracting the mean optimism across replicates from the apparent performance.
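The logic of that optimism adjustment can be sketched in a few lines. The example below implements the generic optimism-adjusted bootstrap for a single model and the AUC metric, on synthetic data; it follows the general procedure described above, and the details of Walsh et al.'s implementation may differ.

```python
# Minimal sketch of the optimism-adjusted bootstrap described above (the
# general procedure; Walsh et al.'s exact implementation may differ).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)
model = LogisticRegression(max_iter=1000)

# Apparent performance: the model is trained and tested on the full data set.
model.fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Optimism: for each bootstrap replicate, refit on the replicate and take
# (AUC on the replicate) - (AUC of that same fit on the original data).
optimisms = []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))  # resample with replacement
    model.fit(X[idx], y[idx])
    boot_auc = roc_auc_score(y[idx], model.predict_proba(X[idx])[:, 1])
    orig_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimisms.append(boot_auc - orig_auc)

adjusted = apparent - np.mean(optimisms)
print(f"apparent AUC = {apparent:.3f}, optimism-adjusted AUC = {adjusted:.3f}")
```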

Data used

Data used to supply the ML models mainly came from health systems. One study [Reference Simon, Shortreed, Johnson, Rossom, Lynch and Ziebell29] compared a model using only data from patient records in a health system database with several models combining the same data with additional data, particularly clinical data (sociodemographic data, questionnaire results, data from an index consultation). The aim was to predict suicide risk in the 90 days following a consultation for suicidal ideation. This study found approximately equivalent performance between the models (AUC 0.843 vs. 0.850) [Reference Simon, Shortreed, Johnson, Rossom, Lynch and Ziebell29]: the data collected during the medical visit provided a statistically significant improvement in the prediction of suicide risk, but with a small effect size. A Korean team [Reference Ryu, Lee, Lee, Kim and Kim30] sought to identify patients at risk of suicide among those who expressed suicidal ideation in a self-administered questionnaire, by analyzing retrospective data from a national database; they reported good overall performance, with an accuracy of 88.9% and an AUC of 0.947.
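The design of that comparison, the same algorithm scored on nested feature sets, is straightforward to sketch. In the example below, the data and the "records-only" versus "records plus clinical" feature split are synthetic stand-ins rather than Simon et al.'s actual variables.

```python
# Minimal sketch of a nested feature-set comparison (records-only vs. records
# plus extra clinical variables). Data and the 20/10 feature split are
# synthetic stand-ins, not Simon et al.'s actual variables.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=30, n_informative=12,
                           weights=[0.95], random_state=0)

records_only = X[:, :20]   # stand-in for health-system record features
records_plus = X           # stand-in for records + visit-level clinical data

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for label, features in [("records only", records_only),
                        ("records + clinical", records_plus)]:
    auc = cross_val_score(model, features, y, cv=cv, scoring="roc_auc").mean()
    print(f"{label}: AUC = {auc:.3f}")
```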

Among the included studies, four used a prospective design. Zheng et al. [Reference Zheng, Wang, Hao, Ye, Liu and Xia28] used a deep NN to prospectively predict one-year suicide risk and identify people at risk of suicide, using data solely from a health database. Performance was acceptable, with an AUC of 0.769 (95% CI: 0.721–0.817), and this deep learning model significantly outperformed the other algorithms tested on the same cohort (XGB: AUC 0.702; LR: AUC 0.604). Another study [Reference Bhak, Jeong, Cho, Jeon, Cho and Gim33] sought to prospectively identify patients with major depressive disorder and suicidal patients by analyzing blood markers (methylomes and transcriptomes) in a small sample; their random forest model reached an accuracy of 92.6% in distinguishing suicidal patients from those with characterized depressive episodes and 86.7% in distinguishing suicidal patients from control subjects. Miché et al. [Reference Miché, Studerus, Meyer, Gloster, Beesdo-Baum and Wittchen32] also prospectively assessed the risk of suicide attempts in adolescents and young adults, and Hill et al. [Reference Hill, Oosterhoff and Do34] sought to prospectively identify adolescents who would attempt suicide in a large sample.

Outcomes in teenagers and young adults

Four studies investigated the prediction of suicide risk or the identification of at-risk patients among adolescents and young adults. Miché et al. [Reference Miché, Studerus, Meyer, Gloster, Beesdo-Baum and Wittchen32] studied four ML algorithms for suicide attempt risk assessment in patients aged 14 to 24 years and found similar performance across them (AUC between 0.824 and 0.829), the highest AUC being obtained with ridge regression. Hill et al. [Reference Hill, Oosterhoff and Do34] sought to identify patients at risk for suicide attempts in a large cohort of 4,834 teenagers over 12 months; two classification trees reached the highest risk prediction, with sensitivity/specificity profiles of 69.8%/85.7% for the first and 90.6%/70.9% for the second. A Korean study [Reference Jung, Park, Kim, Na, Kim and Kim35] aimed to retrospectively identify at-risk patients in a national database including 59,084 teenagers; all the models used had an accuracy between 77.5% and 79%, comparable to that of LR (77.9%), the most accurate being XGB (79%) and the least accurate an artificial neural network (ANN; 77.5%). In 2018, Walsh et al. [Reference Walsh, Ribeiro and Franklin27] studied the prediction of adolescent suicide attempts in a retrospective longitudinal cohort over several time windows (from 1 week to 2 years), comparing a random forest model with LR. Performance was good without requiring a face-to-face assessment (AUC approximately between 0.8 and 0.9 depending on the time window, the AUC being higher the more imminent the suicide attempt).
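For readers less familiar with the sensitivity/specificity profiles quoted above, the sketch below fits a shallow classification tree on synthetic data and derives both metrics from its confusion matrix. It illustrates the metrics, not Hill et al.'s trees; tree depth and class weighting trade one metric against the other, which is why the two trees above report such different profiles.

```python
# Minimal sketch: sensitivity and specificity of a classification tree,
# computed from the confusion matrix on held-out synthetic data (illustrative
# only; not the trees or data used by Hill et al.).
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=15, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced",
                              random_state=0).fit(X_tr, y_tr)

# confusion_matrix orders cells as [[TN, FP], [FN, TP]] for labels {0, 1}.
tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)  # proportion of true cases flagged
specificity = tn / (tn + fp)  # proportion of non-cases correctly cleared
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```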

Use of AI in specific populations

Five publications studied specific populations. Haroz et al. [Reference Haroz, Walsh, Goklish, Cwik, O’Keefe and Barlow26] sought to identify at-risk patients in a Native American community during the 24 months following an initial suicide attempt; with four ML algorithms, they obtained AUCs between 0.81 (decision tree) and 0.87 (ridge regression), whereas the AUC for previous suicide attempt alone was 0.57. Lyu and Zhang [Reference Lyu and Zhang36] used a backpropagation NN to predict suicide risk among rural Chinese residents, with a total coincidence rate of 84.6%. Kessler et al. [Reference Kessler, Hwang, Hoffmire, McCarthy, Petukhova and Rosellini37] aimed to identify American veterans at high risk of suicide and found similar sensitivity across algorithms for detecting at-risk veterans; the best-performing model was the Bayesian additive regression tree algorithm, with 28% of suicides occurring among the 5% of veterans flagged as highest risk. Poulin et al. [Reference Poulin, Shiner, Thompson, Vepstas, Young-Xu and Goertzel38] sought to identify veterans at risk of suicide through analysis of each patient’s medical notes; with a supervised ML algorithm retrospectively analyzing the medical records of veterans who died by suicide, they obtained an accuracy of 67–69%.

Gradus et al. [Reference Gradus, Rosellini, Horváth-Puhó, Street, Galatzer-Levy and Jiang31] sought to predict suicide risk according to sex by using several Danish databases, thus including a very large sample of patients. With a random forest algorithm, they obtained good predictive performance for suicide risk (AUC of 0.80 in men and 0.88 in women).

Discussion

Main results

Our review shows rapidly growing interest in the application of AI to suicide prevention. The selected studies were all published between 2014 and 2020, and several more have been published since the end of this review [Reference Chen, Zhang-James, Barnett, Lichtenstein, Jokinen and D’Onofrio39–Reference Cho, Geem and Na41], which demonstrates the interest in this subject. A meta-analysis published in April 2021 directly compared the predictive capabilities of four leading suicide theories with ML [Reference Schafer, Kennedy, Gallyer and Resnik42]. This growing interest is in line with the major development of AI, which is currently one of the main emerging technologies.

This article has provided an inventory of studies using AI to assess individual suicide risk and to identify patients at high risk of suicide. These studies suggest that AI could be an effective technology for this purpose, with several algorithms used and reproducible results in different populations. This review is, to our knowledge, the first systematic review on this topic.

Limitations of this study

This preliminary study has several limitations. First, it includes a small number of studies: the number of studies using AI for suicide prevention is growing rapidly, but the number published to date remains low. However, this study does not aim at a precise evaluation of the performance of AI in a given situation, but at an assessment of the potential of this technology; we have listed the performance of the different algorithms tested to better assess this potential. Second, the selected studies come from a small number of countries. Most (88%) were published in the USA, Canada, or Korea. These countries have significant health databases, and AI could be more challenging to implement in countries or regions where patient data are less accessible or less accurate [Reference Marques de Souza Filho, de Amorim Fernandes, Lacerda de Abreu Soares, Luiz Seixas, Augusto Sarmet, dos Santos and Altenburg Gismondi43].

The included studies mostly used retrospective data. The performance of these algorithms may be lower in clinical practice, in heterogeneous populations, and with prospective data; the performance of AI in real clinical situations is still unknown and remains to be clarified.

Finally, some studies did not use a cross-validation technique to limit overfitting, and their results may therefore have been over-optimistic. However, only two of the included studies did not use cross-validation.

Feasibility and recommendations for the use of AI

The included studies that used AI to predict suicidal behavior found overall good performance with the most commonly used algorithms (LR, XGB/GBT, NN, and RF), with AUCs approximately between 0.8 and 0.9 (see Table 1 and Figure 6). The data required to achieve such performance are probably less voluminous than initially assumed: some of the included studies found good performance using health system data alone [Reference Choi, Lee, Yoon, Won and Kim25,Reference Simon, Shortreed, Johnson, Rossom, Lynch and Ziebell29]. It is possible that applying AI to health system data already collected would be sufficient to allow a significant advance in the prediction of suicide risk, and it could be a first step towards the use of this technology. The use of complex algorithms is likely to lead to better performance, but some simpler algorithms, such as LR, currently achieve relatively close performance and are probably easier and cheaper to implement in the health system.

If the performance of this technology is similar in clinical practice, it could lead to a more accurate prediction of suicidal risk and thus to significant changes in the management of patients at risk of suicide.

Patients’ data collection: ethical reflection

AI raises ethical concerns within the medical community [Reference Benke and Benke44,Reference Le Glaz, Haralambous, Kim-Dufor, Lenca, Billot and Ryan45], especially regarding the use of data, the place of the practitioner in care, and ethical and medico-legal responsibility.

Patients’ data already have an important place in the assessment and management of suicidal behavior. Practitioners use patients’ data, such as their personal and family history, their care history, and their sociodemographic and ethnic characteristics, to evaluate the risk of a suicide attempt. AI can optimize the analysis of these data and thus yield better efficiency. The application of AI to health data will require robust cybersecurity, as well as a clear legal framework.

AI is complementary to the medical assessment and does not replace it. Optimal performance will probably be reached through the proper use of AI by the physician, within holistic patient care; the doctor–patient relationship will remain essential. AI also raises a question of responsibility that remains unresolved. AI could provide an information gain: clinicians will have more elements with which to assess a situation and guide their management. It seems to us that responsibility will always rest with the clinician, once he or she is informed of the performance and limitations of this technology in specific clinical situations; the physician will then be able to organize personalized care. The medical profession already proceeds in this way for other technologies used in medicine. Current data suggest an interesting performance of AI in suicide prevention and justify a more precise exploration of this tool.

Conclusion

AI is increasingly being used in suicide research, with a recent increase in the number of studies published. This technology may allow a significant evolution in suicide risk assessment, with a more accurate and reliable assessment than with present methods. This tool is likely to become more accessible in the coming years. AI is already being used successfully in other medical disciplines. This technology will probably have its place as a complement to existing tools in suicide prevention. However, AI is not yet usable in clinical practice. The performance presented in this article is based on retrospective data. The performance of AI in clinical practice remains unknown. Further studies are required to clarify the value of this technology in suicide risk assessment, including prospective studies in clinical application.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1192/j.eurpsy.2022.8.

Acknowledgement

The authors are grateful to Lucy AMAT and Claude SEIXAS for their collaboration.

Data Availability Statement

All the studies included in this paper are available in the cited databases (PubMed, EMBASE, and SCOPUS). Extracted data from the studies are available in Supplementary File S2.

Author Contributions

Conceptualization: A.L., S.B.; Investigation: A.L.; Methodology: A.L., S.B.; Supervision: E.B.-G., M.W., C.L., S.B.; Validation: A.L., A.L.G.; Writing—original draft: A.L., P.-A.P.; Writing—review and editing: A.L., A.L.G., J.S., S.B.

Financial Support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Conflicts of Interest

The authors declare none.

Footnotes

Present address of Alban Lejeune: Hôpital de Bohars, CHRU de Brest, Brest, France.

References

1. World Health Organization. Suicide worldwide in 2019: global health estimates. Geneva: World Health Organization; 2021. https://apps.who.int/iris/handle/10665/341728.
2. Berrouiguet S, Courtet P, Larsen ME, Walter M, Vaiva G. Suicide prevention: towards integrative, innovative and individualized brief contact interventions. Eur Psychiatry. 2018;47:25–6.
3. Chesney E, Goodwin GM, Fazel S. Risks of all-cause and suicide mortality in mental disorders: a meta-review. World Psychiatry. 2014;13(2):153–60.
4. Nordentoft M, Mortensen PB, Pedersen CB. Absolute risk of suicide after first hospital contact in mental disorder. Arch Gen Psychiatry. 2011;68(10):1058–64.
5. Runeson B, Haglund A, Lichtenstein P, Tidemalm D. Suicide risk after nonfatal self-harm: a national cohort study, 2000–2008. J Clin Psychiatry. 2016;77(2):240–6.
6. Parra-Uribe I, Blasco-Fontecilla H, Garcia-Parés G, Martínez-Naval L, Valero-Coppin O, Cebrià-Meca A, et al. Risk of re-attempts and suicide death after a suicide attempt: a survival analysis. BMC Psychiatry. 2017;17(1):163.
7. Observatoire national du suicide (France). Suicide: quels liens avec le travail? Penser la prévention et les systèmes d'information [Suicide: what links with work? Thinking about prevention and information systems]. 4th report; 2020.
8. Torous J, Larsen ME, Depp C, Cosco TD, Barnett I, Nock MK, et al. Smartphones, sensors, and machine learning to advance real-time prediction and interventions for suicide prevention: a review of current progress and next steps. Curr Psychiatry Rep. 2018;20(7):51.
9. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry. 2019;53(10):954–64.
10. Lindh ÅU, Dahlin M, Beckman K, Strömsten L, Jokinen J, Wiktorsson S, et al. A comparison of suicide risk scales in predicting repeat suicide attempt and suicide: a clinical cohort study. J Clin Psychiatry. 2019;80(6). https://www.psychiatrist.com/JCP/article/Pages/2019/v80/18m12707.aspx.
11. Franklin JC, Ribeiro JD, Fox KR, Bentley KH, Kleiman EM, Huang X, et al. Risk factors for suicidal thoughts and behaviors: a meta-analysis of 50 years of research. Psychol Bull. 2017;143(2):187–232.
12. Bernert RA, Hilberg AM, Melia R, Kim JP, Shah NH, Abnousi F. Artificial intelligence and suicide prevention: a systematic review of machine learning investigations. Int J Environ Res Public Health. 2020;17(16):5929.
13. Berrouiguet S, Billot R, Larsen ME, Lopez-Castroman J, Jaussent I, Walter M, et al. An approach for data mining of electronic health record data for suicide risk management: database analysis for clinical decision support. JMIR Ment Health. 2019;6(5):e9766.
14. Miller DD, Brown EW. Artificial intelligence in medical practice: the question to the answer? Am J Med. 2018;131(2):129–33.
15. Burke TA, Ammerman BA, Jacobucci R. The use of machine learning in the study of suicidal and non-suicidal self-injurious thoughts and behaviors: a systematic review. J Affect Disord. 2019;245:869–84.
16. Desjardins I, Cats-Baril W, Maruti S, Freeman K, Althoff R. Suicide risk assessment in hospitals: an expert system-based triage tool. J Clin Psychiatry. 2016;77(7):e874–82.
17. Barak-Corren Y, Castro VM, Javitt S, Hoffnagle AG, Dai Y, Perlis RH, et al. Predicting suicidal behavior from longitudinal electronic health records. Am J Psychiatry. 2017;174(2):154–62.
18. Bernecker SL, Zuromski KL, Gutierrez PM, Joiner TE, King AJ, Liu H, et al. Predicting suicide attempts among soldiers who deny suicidal ideation in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Behav Res Ther. 2019;120:103350.
19. Zaher NA, Buckingham CD. Moderating the influence of current intention to improve suicide risk prediction. AMIA Annu Symp Proc. 2016:1274–82.
20. Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim H-C, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21(11):116.
21. D’Hotman D, Loh E. AI enabled suicide prediction tools: a qualitative narrative review. BMJ Health Care Inform. 2020;27(3):e100175.
22. Sanderson M, Bulloch AG, Wang J, Williamson T, Patten SB. Predicting death by suicide using administrative health care system data: can recurrent neural network, one-dimensional convolutional neural network, and gradient boosted trees models improve prediction performance? J Affect Disord. 2019;264:107–14.
23. Sanderson M, Bulloch AG, Wang J, Williams KG, Williamson T, Patten SB. Predicting death by suicide following an emergency department visit for parasuicide with administrative health care system data and machine learning. EClinicalMedicine. 2020;20:100281.
24. Sanderson M, Bulloch AGM, Wang J, Williamson T, Patten SB. Predicting death by suicide using administrative health care system data: can feedforward neural network models improve upon logistic regression models? J Affect Disord. 2019;257:741–7.
25. Choi SB, Lee W, Yoon J-H, Won J-U, Kim DW. Ten-year prediction of suicide death using Cox regression and machine learning in a nationwide retrospective cohort study in South Korea. J Affect Disord. 2018;231:8–14.
26. Haroz EE, Walsh CG, Goklish N, Cwik MF, O’Keefe V, Barlow A. Reaching those at highest risk for suicide: development of a model using machine learning methods for use with Native American communities. Suicide Life Threat Behav. 2020;50(2):422–36.
27. Walsh CG, Ribeiro JD, Franklin JC. Predicting suicide attempts in adolescents with longitudinal clinical data and machine learning. J Child Psychol Psychiatr. 2018;59(12):1261–70.
28. Zheng L, Wang O, Hao S, Ye C, Liu M, Xia M, et al. Development of an early-warning system for high-risk patients for suicide attempt using deep learning and electronic health records. Transl Psychiatry. 2020;10(1):72.
29. Simon GE, Shortreed SM, Johnson E, Rossom RC, Lynch FL, Ziebell R, et al. What health records data are required for accurate prediction of suicidal behavior? J Am Med Inform Assoc. 2019;26(12):1458–65.
30. Ryu S, Lee H, Lee D-K, Kim S-W, Kim C-E. Detection of suicide attempters among suicide ideators using machine learning. Psychiatry Investig. 2019;16(8):588–93.
31. Gradus JL, Rosellini AJ, Horváth-Puhó E, Street AE, Galatzer-Levy I, Jiang T, et al. Prediction of sex-specific suicide risk using machine learning and single-payer health care registry data from Denmark. JAMA Psychiatry. 2019;77(1):25–34.
32. Miché M, Studerus E, Meyer AH, Gloster AT, Beesdo-Baum K, Wittchen H-U, et al. Prospective prediction of suicide attempts in community adolescents and young adults, using regression methods and machine learning. J Affect Disord. 2019;265:570–8.
33. Bhak Y, Jeong H-O, Cho YS, Jeon S, Cho J, Gim J-A, et al. Depression and suicide risk prediction models using blood-derived multi-omics data. Transl Psychiatry. 2019;9(1):262.
34. Hill RM, Oosterhoff B, Do C. Using machine learning to identify suicide risk: a classification tree approach to prospectively identify adolescent suicide attempters. Arch Suicide Res. 2019;24(2):218–35.
35. Jung JS, Park SJ, Kim EY, Na K-S, Kim YJ, Kim KG. Prediction models for high risk of suicide in Korean adolescents using machine learning techniques. PLoS ONE. 2019;14(6):e0217639.
36. Lyu J, Zhang J. BP neural network prediction model for suicide attempt among Chinese rural residents. J Affect Disord. 2018;246:465–73.
37. Kessler RC, Hwang I, Hoffmire CA, McCarthy JF, Petukhova MV, Rosellini AJ, et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res. 2017;26(3). doi:10.1002/mpr.1575.
38. Poulin C, Shiner B, Thompson P, Vepstas L, Young-Xu Y, Goertzel B, et al. Predicting the risk of suicide by analyzing the text of clinical notes. PLoS ONE. 2014;9(1):e85733.
39. Chen Q, Zhang-James Y, Barnett EJ, Lichtenstein P, Jokinen J, D’Onofrio BM, et al. Predicting suicide attempt or suicide death following a visit to psychiatric specialty care: a machine learning study using Swedish national registry data. PLoS Med. 2020;17(11):e1003416.
40. Kessler RC, Bauer MS, Bishop TM, Demler OV, Dobscha SK, Gildea SM, et al. Using administrative data to predict suicide after psychiatric hospitalization in the Veterans Health Administration system. Front Psychiatry. 2020;11:390.
41. Cho S-E, Geem ZW, Na K-S. Development of a suicide prediction model for the elderly using health screening data. Int J Environ Res Public Health. 2021;18(19):10150.
42. Schafer KM, Kennedy G, Gallyer A, Resnik P. A direct comparison of theory-driven and machine learning prediction of suicide: a meta-analysis. PLoS ONE. 2021;16(4):e0249833.
43. Marques de Souza Filho E, de Amorim Fernandes F, Lacerda de Abreu Soares C, Luiz Seixas F, Augusto Sarmet MD, dos Santos A, Altenburg Gismondi R, et al. Inteligência Artificial em Cardiologia: Conceitos, Ferramentas e Desafios. "Quem Corre é o Cavalo, Você Precisa ser o Jóquei" [Artificial intelligence in cardiology: concepts, tools and challenges. "The horse is the one that runs, you need to be the jockey"]. ABC Cardiol. 2019. https://www.scielo.br/scielo.php?script=sci_arttext&pid=S0066-782X2019005022109.
44. Benke K, Benke G. Artificial intelligence and big data in public health. Int J Environ Res Public Health. 2018;15(12):2796.
45. Le Glaz A, Haralambous Y, Kim-Dufor D-H, Lenca P, Billot R, Ryan TC, et al. Machine learning and natural language processing in mental health: systematic review. J Med Internet Res. 2021;23(5):e15708.
