Artificial intelligence is increasingly being used in medical practice for visit documentation, treatment plans and discharge summaries,1 tasks that were previously completed by the physician. There has been rapid growth in FDA-approved artificial intelligence devices in medicine, with 1016 approved devices as of March 2025,2 an increase from 64 devices in 2020.3 With the increased use of this technology for not only administrative but also clinical tasks, there is concern that some physicians could become overly reliant on it. Every physician brings their unique background and knowledge to the exam room. When physicians rely on artificial intelligence, patients may no longer benefit from this experience. As physicians increasingly trust and rely on the recommendations of these tools in medical practice, the potential for the deskilling of physicians exists. The purpose of this paper is to discuss the risks of physician deskilling arising from increasing reliance on artificial intelligence.
Growth of artificial intelligence
Diverse factors are contributing to the rapid growth of artificial intelligence technology in medicine. The technology is being used to provide skills that are missing due to shortages in the healthcare workforce,4 and it decreases the time spent gathering relevant information.5 Clinical understanding of a patient often develops over multiple visits and includes non-verbal cues, physical examination, overall appearance and body language, as well as spoken language.6 Artificial intelligence technologies function at superhuman speeds, providing services and solving tasks with very little human effort or skill required, which may lead to widespread deskilling.7 The use of artificial intelligence cannot replace the human aspect of the doctor–patient relationship, and the communication skills of physicians may diminish when patient information can be accessed and shared without human interaction.8
The adoption of artificial intelligence into routine medical care will require staff training at all experience levels. The technology may be used in medical school education to help teach clinical decision-making, including simulations of rare conditions. However, some feel that this technology cannot replicate the subtleties of disease presentation that clinicians must identify.9 As time passes, many clinical staff will have no experience with the skills that were used to train the artificial intelligence automation.
Overreliance on artificial intelligence
Deskilling refers to a loss of knowledge, decision-making ability and autonomy to perform a specific task or job. Deskilling may occur after the implementation of new technologies, and in medicine may result in decreased clinical knowledge and confidence in clinical decision-making.10 Staff spend increasing amounts of time comfortably using artificial intelligence products and tools without understanding how these tools function.7 High confidence in the ability of artificial intelligence to successfully perform a task may decrease the critical thinking of staff, diminishing problem-solving abilities and shifting effort towards information verification.5 To ensure the reliability of artificial intelligence output, verification requires staff effort, including comparison with external sources and application of their own knowledge.5 Less experienced clinicians in particular may rely on this technology, accepting its decisions without question. Without training to interpret and integrate the results and to recognise exceptions, staff may be unaware of the challenges and risks of artificial intelligence products in both routine and exceptional situations.11 When automation requires fewer skills to complete a task, technology failures may lead to major disruptions or inefficiencies that impair physician performance.12 In safety-critical settings such as medicine, the introduction of artificial intelligence will lead to new, unanticipated safety risks, including new failure paths.13
Applications based on the technology are often described as having a levelling effect, helping novices more than experts.14 Along with human skill and experience levels, the design of an artificial intelligence product and its implementation into the overall workflow will affect the extent of deskilling and which staff are affected.14
Additional risks using artificial intelligence
The risks associated with these products, and the challenge of deskilling, may be greater for newly educated physicians who lack experience completing the tasks before automation, and may increase over time as the workforce is renewed.15 Additionally, physicians may continue to apply biased recommendations from artificial intelligence decision support systems even when performing the task alone, without the technology’s assistance.16 Some users attribute more authority to an automated tool than to other sources of advice. This over-reliance on technology, referred to as automation bias,17,18 has been reported in diverse areas of medicine, including clinical diagnosis, electronic prescribing and ECG interpretation.19–22 There is also concern that if basic tasks such as interview skills are performed by artificial intelligence, physicians may not develop advanced skills to the same extent.23 As more cognitive tasks are shifted to computers, it is important to recognise the limitations of the technology. If operators mistakenly believe that technology can solve any problem, they may stop learning procedural knowledge. As learned in the aviation industry, this mistaken belief is of particular concern in a crisis, when technological solutions are unavailable and human problem-solving skills are required.24 The frequent use of convenient artificial intelligence tools may have a negative impact on critical thinking abilities, mediated by cognitive offloading, the process of using physical actions, external tools or resources to reduce cognitive load while performing tasks.25 Artificial intelligence tools allow individuals to manage complex information by relying on technology, enabling them to allocate their mental resources more efficiently, but this may decrease engagement in deep, reflective thinking.26
Challenges of using artificial intelligence systems
The performance level of an artificial intelligence product is tied to its training data. There are many diverse challenges related to the training data used in medical artificial intelligence applications, including missing data, inaccuracy, coding errors, biases and redundancies.20 As time passes, the training data may no longer reflect current practice standards and treatments. There must be recognition of the fundamental, ongoing need for continuous data quality improvement and monitoring of these products.27
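To make this point concrete, the minimal Python sketch below illustrates one form that routine training-data monitoring could take: auditing a table of records for missing values, duplicate rows and entries old enough to predate current practice. This is an illustration only; the paper describes no specific pipeline, and the function name, column names and five-year cutoff are all hypothetical.

```python
# Illustrative sketch only: no specific monitoring pipeline is described in the paper.
# Column names, the five-year staleness cutoff and the example records are hypothetical.
import pandas as pd

def audit_training_data(df: pd.DataFrame, date_column: str, max_age_years: int = 5) -> dict:
    """Report missing values, duplicate rows and records older than a cutoff date."""
    cutoff = pd.Timestamp.today() - pd.DateOffset(years=max_age_years)
    dates = pd.to_datetime(df[date_column], errors="coerce")  # unparseable dates become NaT
    return {
        "rows": len(df),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "records_older_than_cutoff": int((dates < cutoff).sum()),
    }

# Hypothetical usage with made-up records
records = pd.DataFrame({
    "diagnosis_code": ["F32.1", None, "F41.1", "F32.1"],
    "visit_date": ["2015-03-02", "2024-11-20", "2023-06-14", "2015-03-02"],
})
print(audit_training_data(records, date_column="visit_date"))
```

In practice such checks would run on a schedule against the deployed product’s training and input data, flagging drift for human review rather than acting automatically.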
There is disagreement on how much physician reliance on technology, including artificial intelligence, is contributing to deskilling. Over-reliance on this technology can reduce the quality of medical decision-making and clinical reasoning, and can negatively impact communication with patients. Physicians may prioritise interactions with the technology over communication with the patient. As time passes and the use of artificial intelligence increases, as in pathology, there is concern that there will not be sufficient numbers of physicians who can advance the field and train the next generation.28 It is expected that humans will outsource more decision-making to the technology as the performance of these tools improves over time. Physicians may be unaware that, with increasing reliance on artificial intelligence, skill decay may begin quickly.29,30 More investigation is needed into which tasks are appropriate for these tools in medicine, and how best to design and implement collaborative, patient-centred approaches.31 It is key that the use of artificial intelligence tools does not negatively impact the doctor–patient relationship. There is also concern that, as use of the technology increases, some physicians may feel their skills and knowledge are less valued, leading to decreased job satisfaction.28 However, expanding the use of these tools for administrative and clerical tasks may increase the time available for clinical duties.
Limitations
Most commercial algorithms, including artificial intelligence products, are proprietary, and the implications of this were not discussed.7 This paper did not address upskilling associated with the technology; for example, artificial intelligence tools may improve productivity for junior workers, as with coding for medical records.19 Insurance issues related to medical testing with these products, and artificial intelligence-related malpractice, were omitted. Strategies to mitigate the deskilling impacts of the technology were not discussed, as they are complex and unique to the application and setting. Issues related to patient privacy, and the complex regulatory, technical and accountability concerns raised by the continuously changing nature of artificial intelligence products, were not included.32,33 Cybersecurity issues, including data poisoning and corruption of artificial intelligence training data, were also omitted.34
Conclusion
Artificial intelligence tools are increasingly being used in medicine. As this technology becomes an everyday part of medical treatment, it is imperative that physicians recognise the limitations of artificial intelligence tools and continue to rely on and grow their own knowledge, experience and clinical skills. Artificial intelligence tools may assist with basic administrative tasks in medicine, but they cannot replace the uniquely human interpersonal and reasoning skills of physicians. Physicians must guard against deskilling and learn how best to use artificial intelligence tools in clinical medicine.
Data availability
Data availability is not applicable to this article as no new data were created or analysed in this study.
Author contributions
S.M. and T.G. wrote the initial draft. S.M., T.G., J.R.G., P.C.W., E.D.A., R.B. and M.B. edited, reviewed and approved the final manuscript.
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Declaration of interest
J.R.G., director of the National Institute for Health and Care Research Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.