
Artificial intelligence and deskilling in medicine

Published online by Cambridge University Press: 08 January 2026

Scott Monteith*
Affiliation:
Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, Michigan, USA
Tasha Glenn
Affiliation:
ChronoRecord Association, Fullerton, California, USA
John Richard Geddes
Affiliation:
Department of Psychiatry, University of Oxford, Oxford, UK
Peter C. Whybrow
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, California, USA
Eric D. Achtyes
Affiliation:
Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA
Rita Bauer
Affiliation:
Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus, Medical Faculty, Technische Universität Dresden, Dresden, Germany
Michael Bauer
Affiliation:
Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus, Medical Faculty, Technische Universität Dresden, Dresden, Germany
*Correspondence: Scott Monteith. Email: monteit2@msu.edu

Abstract

Artificial intelligence is increasingly being used in medical practice to perform tasks that were previously completed by the physician, such as visit documentation, treatment plans and discharge summaries. As artificial intelligence becomes a routine part of medical care, physicians increasingly trust and rely on its clinical recommendations. However, there is concern that some physicians, especially those who are younger and less experienced, will become over-reliant on artificial intelligence. Over-reliance may reduce the quality of clinical reasoning and decision-making, negatively affect communication with patients and raise the potential for deskilling. As artificial intelligence becomes a routine part of medical treatment, it is imperative that physicians recognise the limitations of artificial intelligence tools. These tools may assist with basic administrative tasks but cannot replace the uniquely human interpersonal and reasoning skills of physicians. The purpose of this feature article is to discuss the risks of physician deskilling arising from increasing reliance on artificial intelligence.

Information

Type
Feature
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

Artificial intelligence is increasingly being used in medical practice for visit documentation, treatment plans and discharge summaries,1 all tasks previously completed by the physician. There has been rapid growth in FDA-approved artificial intelligence devices in medicine, with 1016 approved devices as of March 2025,2 an increase from 64 devices in 2020.3 With the increased use of artificial intelligence in medicine for clinical as well as administrative tasks, there is concern that some physicians could become overly reliant on the technology. Every physician brings a unique background and body of knowledge to the exam room; when physicians rely on artificial intelligence, patients may no longer benefit from this experience. As physicians increasingly trust and rely on the recommendations of artificial intelligence in medical practice, the potential for deskilling of physicians exists. The purpose of this paper is to discuss the risks of physician deskilling arising from increasing reliance on artificial intelligence.

Growth of artificial intelligence

Diverse factors are contributing to the rapid growth of artificial intelligence technology in medicine. The technology is being used to provide skills that are missing owing to shortages in the healthcare workforce,4 and it decreases the time spent gathering relevant information.5 Yet clinical understanding of a patient often develops over multiple visits, and draws on non-verbal cues, physical examination, overall appearance and body language as well as spoken language.6 Artificial intelligence systems function at superhuman speeds, providing services and solving tasks with very little human effort or skill required, which can lead to widespread deskilling.7 The use of artificial intelligence cannot replace the human aspect of the doctor–patient relationship, and the communication skills of physicians may diminish when patient information can be accessed and shared without human interaction.8

The adoption of artificial intelligence in routine medical care will require staff training at all experience levels. The technology may be used in medical school education to help teach clinical decision-making, including simulations for rare conditions, although some feel that it cannot replicate the subtleties in disease presentation that clinicians must learn to identify.9 As time passes, many clinical staff will have no experience of performing the skills that artificial intelligence automation has replaced.

Over-reliance on artificial intelligence

Deskilling refers to a loss of knowledge, decision-making ability and autonomy in performing a specific task or job. Deskilling may occur after the implementation of new technologies and, in medicine, may result in decreased clinical knowledge and confidence in clinical decision-making.10 Staff spend increasing amounts of time comfortably using artificial intelligence products and tools without understanding how these tools function.7 High confidence in the ability of artificial intelligence to successfully perform a task may decrease the critical thinking of staff, diminishing problem-solving abilities and shifting effort towards information verification.5 Verifying the reliability of artificial intelligence output requires effort from staff, including comparing the output with external sources and drawing on their own knowledge.5 Less experienced clinicians in particular may rely on this technology, accepting its decisions without question. Without training to interpret and integrate the results and to recognise exceptions, staff may be unaware of the challenges and risks of artificial intelligence products in both routine and exceptional situations.11 When automation requires fewer skills to complete a task, technology failures may lead to major disruptions or inefficiencies that impair physician performance.12 In safety-critical fields such as medicine, the introduction of artificial intelligence will lead to new, unanticipated safety risks, including new failure paths.13

Applications based on artificial intelligence are often described as having a levelling effect, helping novices more than experts.14 Along with human skill and experience levels, the design of an artificial intelligence product and its implementation into the overall workflow will influence whether deskilling occurs and which staff are affected.14

Additional risks of using artificial intelligence

The risks associated with artificial intelligence products, and the challenge of deskilling, may be greater for newly qualified physicians who lack experience of completing tasks before automation, and may increase over time as the workforce turns over.15 Additionally, physicians may continue to apply biased recommendations learned from artificial intelligence decision support systems even when performing a task alone, without the technology's assistance.16 Some users attribute more authority to an automated tool than to other sources of advice. This over-reliance on technology, referred to as automation bias,17,18 has been reported in diverse areas of medicine, including clinical diagnosis, electronic prescribing and ECG interpretation.19–22 There is also concern that if basic tasks such as interviewing are performed by artificial intelligence, physicians may not develop advanced skills to the same extent.23 As more cognitive tasks are shifted to computers, it is important to recognise the limitations of the technology. If operators mistakenly believe that technology can solve any problem, they may stop learning procedural knowledge. As the aviation industry has learned, this mistaken belief is of particular concern in a crisis, when technological solutions are unavailable and human problem-solving skills are required to resolve the situation.24 The frequent use of convenient artificial intelligence tools may also have a negative impact on critical thinking abilities, mediated by cognitive offloading: the process of using physical actions, external tools or resources to reduce cognitive load while performing tasks.25 Artificial intelligence tools allow individuals to manage complex information by relying on technology, enabling them to allocate their mental resources more efficiently, but this may decrease engagement in deep, reflective thinking.26

Challenges of using artificial intelligence systems

The performance level of an artificial intelligence product is tied to its training data. There are many diverse challenges related to the training data used in medical artificial intelligence applications, including missing data, inaccuracy, coding errors, biases and redundancies.20 As time passes, the training data may no longer reflect current practice standards and treatments. There must be recognition of the fundamental, ongoing need for continuous data quality improvement and monitoring of these products.27
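To make this monitoring requirement concrete, the sketch below (in Python) illustrates one simple form such a check could take: comparing incoming clinical records against summary statistics from a model's training data, and flagging the model for human review when the missing-data rate or distribution drift exceeds a threshold. This is a minimal illustration only; the field names, thresholds and drift measure are hypothetical, not taken from the cited literature or from any deployed product.

```python
# Minimal illustrative sketch: hypothetical field names and thresholds,
# not a validated clinical monitoring tool.
from dataclasses import dataclass


@dataclass
class DataQualityReport:
    missing_rate: float   # fraction of records lacking a monitored field
    max_drift: float      # largest relative shift in a field mean
    needs_review: bool    # True if either threshold is exceeded


def check_data_quality(training_means: dict[str, float],
                       current_records: list[dict[str, float]],
                       missing_threshold: float = 0.05,
                       drift_threshold: float = 0.10) -> DataQualityReport:
    """Compare current clinical records against training-data statistics."""
    fields = training_means.keys()
    n = len(current_records)
    # Missing-data rate: count records lacking any monitored field.
    missing = sum(1 for r in current_records if any(f not in r for f in fields))
    missing_rate = missing / n if n else 1.0
    # Crude drift measure: largest relative shift of a field mean
    # between the training data and the current records.
    max_drift = 0.0
    for field, train_mean in training_means.items():
        values = [r[field] for r in current_records if field in r]
        if values and train_mean:
            current_mean = sum(values) / len(values)
            max_drift = max(max_drift,
                            abs(current_mean - train_mean) / abs(train_mean))
    needs_review = missing_rate > missing_threshold or max_drift > drift_threshold
    return DataQualityReport(missing_rate, max_drift, needs_review)


# Example: one record is missing 'bmi' and both field means have shifted,
# so the report flags the model for human review.
report = check_data_quality(
    training_means={"age": 42.0, "bmi": 26.5},
    current_records=[{"age": 55.0, "bmi": 31.0}, {"age": 61.0}],
)
print(report)
```

Real monitoring pipelines are far more extensive than this, but the underlying principle is the one argued above: a model whose inputs no longer resemble its training data should be routed to human review rather than silently trusted.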

There is disagreement about how much physician reliance on technology, including artificial intelligence, is contributing to deskilling. Over-reliance on the technology can reduce the quality of medical decision-making and clinical reasoning, and negatively affect communication with patients; physicians may prioritise interactions with the technology over communication with the patient. As the use of artificial intelligence increases, as in pathology, there is concern that there will not be sufficient numbers of physicians who can advance the field and train the next generation.28 Humans are expected to outsource more decision-making to the technology as the performance of these tools improves over time, and physicians may be unaware that with increasing reliance on artificial intelligence, skill decay may begin quickly.29,30 More investigation is needed into which tasks are appropriate for artificial intelligence tools in medicine, and how best to design and implement collaborative, patient-centred approaches.31 It is essential that the use of artificial intelligence tools does not damage the doctor–patient relationship. There is also concern that as use of the technology increases, some physicians may feel their skills and knowledge are less valued, leading to decreased job satisfaction.28 However, expanding the use of these tools for administrative and clerical tasks may increase the time available for clinical duties.

Limitations

Most commercial algorithms, including artificial intelligence products, are proprietary, and the implications of this were not discussed.7 This paper also did not address upskilling associated with the technology; for example, artificial intelligence tools may improve productivity for junior workers, as with coding for medical records.19 Insurance issues related to medical testing with these products, and artificial intelligence-related malpractice, were omitted. Strategies to mitigate the deskilling impacts of the technology were not discussed, as they are complex and specific to the application and setting. Issues related to patient privacy, and the complex regulatory, technical and accountability concerns raised by the continuously changing nature of artificial intelligence products, were not included.32,33 Cybersecurity issues, including data poisoning and corruption of artificial intelligence training data, were also omitted.34

Conclusion

Artificial intelligence tools are increasingly and routinely being used in medicine. As the technology becomes a routine part of medical treatment, it is imperative that physicians recognise the limitations of artificial intelligence tools and continue to rely on and grow their own knowledge, experience and clinical skills. Artificial intelligence tools may assist with basic administrative tasks in medicine, but they cannot replace the uniquely human interpersonal and reasoning skills of physicians. Physicians must guard against deskilling and learn how best to use artificial intelligence tools in clinical medicine.

Data availability

Data availability is not applicable to this article as no new data were created or analysed in this study.

Author contributions

S.M. and T.G. wrote the initial draft. S.M., T.G., J.R.G., P.C.W., E.D.A., R.B. and M.B. edited, reviewed and approved the final manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Declaration of interest

J.R.G., director of the National Institute for Health and Care Research Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.

References

American Medical Association. AMA Augmented Intelligence Research. AMA, 2025 (https://www.ama-assn.org/system/files/physician-ai-sentiment-report.pdf).
Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 2020; 3: 118.
Le Lagadec D, Kornhaber R, Cleary M. Navigating the impact of artificial intelligence on our healthcare workforce. J Clin Nurs 2024; 33: 2369–70.
Lee H-P, Sarkar A, Tankelevitch L, Drosos I, Rintel S, Banks R, et al. The impact of generative AI on critical thinking: self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan, 26 Apr–1 May 2025). Association for Computing Machinery, 2025.
Wartman SA, Densen P. Will Artificial Intelligence Undermine the Profession of Medicine? The Pharos, 2024 (https://www.alphaomegaalpha.org/wp-content/uploads/2024/11/pp2-8-Editorial_Final_AUT24.pdf).
Thiele LP. Deskilling: the atrophy of cognitive and social aptitudes. In Human Agency, Artificial Intelligence, and the Attention Economy. Palgrave Macmillan, 2025: 113–52.
Verghese A. Culture shock—patient as icon, icon as patient. N Engl J Med 2008; 359: 2748–51.
Lewin S, Chetty R, Ihdayhid AR, Dwivedi G. Ethical challenges and opportunities in applying artificial intelligence to cardiovascular medicine. Can J Cardiol 2024; 40: 1897–906.
Hoff T. Deskilling and adaptation among primary care physicians using two work innovations. Health Care Manage Rev 2011; 36: 338–48.
Monteith S, Glenn T, Geddes JR, Achtyes ED, Whybrow PC, Bauer M. Challenges and ethical considerations to successfully implement artificial intelligence in clinical medicine and neuroscience: a narrative review. Pharmacopsychiatry 2023; 56: 209–13.
Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA 2017; 318: 517–8.
Mongan J, Kohli M. Artificial intelligence and human life: five lessons for radiology from the 737 MAX disasters. Radiol Artif Intell 2020; 2: e190111.
Crowston K, Bolici F. Deskilling and upskilling with AI systems. Inf Res Int Electr J 2025; 30: 1009–23.
Nilsen P, Sundemo D, Heintz F, Neher M, Nygren J, Svedberg P, et al. Towards evidence-based practice 2.0: leveraging artificial intelligence in healthcare. Front Health Serv 2024; 4: 1368030.
Vicente L, Matute H. Humans inherit artificial intelligence biases. Sci Rep 2023; 13: 15737.
Parasuraman R, Manzey DH. Complacency and bias in human use of automation: an attentional integration. Hum Factors 2010; 52: 381–410.
Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc 2012; 19: 121–7.
Wang W, Gao G, Agarwal R. Friend or foe? Teaming between artificial intelligence and workers with variation in experience. Manage Sci 2024; 70: 5753–75.
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for artificial intelligence (AI) in psychiatry. Curr Psychiatry Rep 2022; 24: 709–21.
Bond RR, Novotny T, Andrsova I, Koc L, Sisakova M, Finlay D, et al. Automation bias in medicine: the influence of automated diagnoses on interpreter accuracy and uncertainty when reading electrocardiograms. J Electrocardiol 2018; 51: S6–11.
Lyell D, Magrabi F, Raban MZ, Pont LG, Baysari MT, Day RO, et al. Automation bias in electronic prescribing. BMC Med Inform Decis Mak 2017; 17: 28.
Aquino YSJ, Rogers WA, Braunack-Mayer A, Frazer H, Win KT, Houssami N, et al. Utopia versus dystopia: professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. Int J Med Inform 2023; 169: 104903.
Kayes DC, Yoon J. Cognitive offloading strategies and decrements in learning: lessons from aviation and aerospace crises. Hum Perf Extrem Environ 2022; 17: 2.
Risko EF, Gilbert SJ. Cognitive offloading. Trends Cogn Sci 2016; 20: 676–88.
Gerlich M. AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies 2025; 15: 6.
Jarrahi MH, Memariani A, Guha S. The principles of data-centric AI. Commun ACM 2023; 66: 84–92.
Nakagawa K, Moukheiber L, Celi LA, Patel M, Mahmood F, Gondim D, et al. AI in pathology: what could possibly go wrong? Semin Diagn Pathol 2023; 40: 100–8.
Dinerstein C. When AI Takes Over: The Hidden Cost of Technological Progress. American Council on Science and Health, 2025 (https://www.acsh.org/news/2025/04/01/when-ai-takes-over-hidden-cost-technological-progress-49389).
Connolly DJ, Horn S, Loewenstein G. Inaccurate beliefs about skill decay. SSRN [Preprint] 2024. Available from: https://doi.org/10.2139/ssrn.4916412.
Heudel PE, Crochet H, Blay JY. Impact of artificial intelligence in transforming the doctor–cancer patient relationship. ESMO Real World Data Digit Oncol 2024; 3: 100026.
Aquino YSJ, Rogers WA, Jacobson SLS, Richards B, Houssami N, Woode ME, et al. Defining change: exploring expert views about the regulatory challenges in adaptive artificial intelligence for healthcare. Health Policy Technol 2024; 13: 100892.
Schneier B, Sanders N. The AI Wars Have Three Factions, and They All Crave Power. The New York Times, 2023 (https://www.nytimes.com/2023/09/28/opinion/ai-safety-ethics-effective.html).
Chen J, Zhang X, Zhang R, Wang C, Liu L. De-Pois: an attack-agnostic defense against data poisoning attacks. IEEE Trans Inf Forens Secur 2021; 16: 3412–25.
