
Feedback of flattery: How AI may worsen narcissism in leadership training

Published online by Cambridge University Press:  31 March 2026

Cody B. Cox*
Affiliation:
St. Mary’s University, San Antonio, TX, USA
Aaron J. Tay
Affiliation:
St. Mary’s University, San Antonio, TX, USA
Corresponding author: Cody B. Cox; Email: ccox9@stmarytx.edu

Type: Commentaries
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

Introduction

The focal article (Mitchell et al., Reference Mitchell, Haslam, Burke and Steffens2026) presents the provocative suggestion that current practices for both identifying and developing leaders promote individuals who demonstrate narcissistic traits. The authors discuss this trend in terms of how leaders are compensated and trained; in their discussion of training, however, they did not address the current shift toward leadership training practices that utilize artificial intelligence (AI). Although current leadership training may already promote narcissism, multiple aspects of AI-assisted leadership training may inadvertently intensify this trend. If used carefully, however, we believe AI could instead be utilized to mitigate narcissistic tendencies.

Artificial intelligence is anticipated to “fundamentally transform how leadership abilities are developed” (Sposato & Dittmar, Reference Sposato and Dittmar2025, p. 2). In fact, the leadership development space already appears to be inundated with AI-driven or AI-enabled tools. Jenkins and Khanna (Reference Jenkins and Khanna2025) discussed AI-driven tools like the Leadership Skills Lab and Lead with Monark that allow potential leaders to role-play in immersive leadership simulations. Likewise, CodeSignal’s Conversational Starter allows individuals to role-play difficult conversations, and Tenor provides a series of role-play scenarios for potential leaders to explore (Tank, Reference Tank2025). AI has also been utilized to speed up the rollout of workplace training, as large language models (LLMs) can generate new training regimens faster than development teams previously could (Maity, Reference Maity2019). Furthermore, AI has increasingly been used for independent individual development as well (Hartley et al., Reference Hartley, Hayak and Ko2024). A Harvard Business Review survey of online discussions of AI use found that 31% of conversations mentioned using AI for personal and professional support and 16% mentioned using it for educational purposes, suggesting that its use for self-directed learning in the workplace is becoming widespread (Zao-Sanders, Reference Zao-Sanders2025).

AI is gaining traction in coaching practices as well. A 2023 survey, for instance, reported that 55% of surveyed employers had implemented AI in learning and development (L&D) initiatives (Taylor & Vinauskaitė, Reference Taylor and Vinauskaitė2023). AI offers several advantages for workplace training and development that can effectively augment a human coach. For example, AI can facilitate the development of personalized SMART goals and provide immediate support during acute stress situations, a time when a human coach may be unavailable. However, AI currently lacks the depth of human coaches in identifying root behavioral or systemic problems, as its analysis is restricted to the data explicitly provided by the trainee. AI may also fail to recognize and mitigate conflicting goals assigned to a trainee, making it less effective than a human coach in navigating nuanced goal-setting situations (Graßmann & Schermuly, Reference Graßmann and Schermuly2021).

Although the use of AI in training and development is potentially promising, it is important to note that the impact of AI on individuals is still not fully understood, and there are early indications that AI use can have a negative impact on human psychology. For example, researchers from the MIT Media Lab recently published research suggesting that AI use over time is associated with decreases in brain activation when writing essays (Kosmyna et al., Reference Kosmyna, Hauptmann, Yuan, Situ, Liao, Beresnitzky, Braunstein and Maes2025). Moreover, many features of AI training seem to promote the hyperpersonalization that the focal article’s authors discuss. For example, AI provides unique, tailored feedback to individual learners (Sposato & Dittmar, Reference Sposato and Dittmar2025), which may heighten the salience of leaders’ personal identities over their group identities. AI coaches can also adapt themselves to the language and idiosyncrasies of the learner, which may reinforce narcissistic beliefs (Graßmann & Schermuly, Reference Graßmann and Schermuly2021).

Our central concern with AI-assisted leadership training is the tendency of AI to engage in sycophantic praise of the user. Sycophancy can be defined as the tendency of generative AI to provide responses that align with user beliefs rather than truthful responses (Sharma et al., Reference Sharma, Tong, Korbak, Duvenaud, Askell, Bowman, Cheng, Durmus, Hatfield-Dodds, Johnston, Kravec, Maxwell, McCandlish, Ndousse, Rausch, Schiefer, Yan, Zhang and Perez2025). Sycophancy often manifests as AI-generated responses that flatter or compliment the user (Morrin et al., Reference Morrin, Nicholls, Levin, Yiend, Iyengar, DelGuidice, Bhattacharya, Tognin, MacCabe, Twumasi, Alderson-Day and Pollak2025). A recent New York Times article (August 12, 2025) suggested that sycophancy is a consequence of AI being trained on human feedback: because users like being praised, they react favorably to flattering responses. Indeed, researchers have recently found that users generally prefer sycophantic feedback, even when that feedback is incorrect (Sharma et al., Reference Sharma, Tong, Korbak, Duvenaud, Askell, Bowman, Cheng, Durmus, Hatfield-Dodds, Johnston, Kravec, Maxwell, McCandlish, Ndousse, Rausch, Schiefer, Yan, Zhang and Perez2025).

Following the theoretical framework established in the focal article, we believe that AI sycophancy, in the form of flattery and compliments, would be particularly hyperpersonalizing and would encourage greater narcissism. Recent evidence suggests that flattery from AI chatbots can reinforce misguided and dangerous thoughts in individuals, leading to delusions of grandeur (Morrin et al., Reference Morrin, Nicholls, Levin, Yiend, Iyengar, DelGuidice, Bhattacharya, Tognin, MacCabe, Twumasi, Alderson-Day and Pollak2025). Leaders with narcissistic traits may be especially vulnerable to this effect, given the preference that people with narcissism already exhibit for flattery (Gu & Watts, Reference Gu and Watts2021). More specifically, leaders who exhibit characteristics of vulnerable narcissism may be especially susceptible to the potential impact of AI (Gauglitz, Reference Gauglitz2021).

Beyond sycophancy, there is also evidence that narcissists are attracted to the individualized, empathetic feedback AI can provide. In one study of consumers’ preferences for voice AI, researchers hypothesized that narcissism would predict both satisfaction with and utilization of an empathetic AI (Poushneh et al., Reference Poushneh, Vasquez-Parraga and Gearhart2024). They argued that because narcissists lack empathy themselves, they would be particularly responsive to an AI that expresses empathy. Their results supported this hypothesis: narcissism predicted both attraction to and use of empathetic AI.

This attraction becomes concerning when paired with the greater availability and scalability of AI relative to more conventional training methods (Sposato & Dittmar, Reference Sposato and Dittmar2025). Although the focal article’s authors suggest that allowing all employees access to leadership training may be depersonalizing and may reduce narcissism, we suggest that making AI coaching universally available in an organization might be detrimental for two reasons. First, universally available training depends on learners choosing to utilize it, and narcissists, given their predilection for sycophantic and empathetic communication, may be the ones most inclined to do so, which may reinforce their narcissism. Second, researchers have suggested that the constant availability of AI may foster dependency, such that leaders start to rely excessively on their AI coach (Graßmann & Schermuly, Reference Graßmann and Schermuly2021). Thus, although we agree with the authors’ suggestion to make leadership training more widely available, making AI-assisted leadership training universally available may be detrimental.

Although we are concerned that AI could enhance narcissistic traits in leadership training, it is important to note that the design of AI is changing quickly. With the release of GPT-5, OpenAI has taken steps to reduce the sycophancy of ChatGPT (OpenAI, 2025). Sposato and Dittmar (Reference Sposato and Dittmar2025) also note that the content AI tools are trained on can be adapted, which suggests that designers could incorporate more depersonalizing content into the training. Researchers could also incorporate more collectivist leadership training into AI to counter the more individualistic aspects of leadership development. For example, by focusing on how a leader’s decisions affect the team and organization rather than the leader personally, AI may serve as a tool to rectify leaders’ narcissistic tendencies rather than worsen them.

To prevent increased narcissism within leadership development programs that use AI, organizations should implement several corrective strategies. First, organizations should permit a human training coach to access the AI chat logs. Although this necessitates careful navigation of data privacy concerns, access to these logs offers empirical insight into the AI’s effectiveness in personalizing training content and acts as an auditing mechanism to ensure the AI is not inadvertently reinforcing or encouraging narcissistic tendencies in the trainee. Second, organizations should consider strategically limiting the availability of AI-based coaching to designated periods. Although this negates the advantage of 24/7 access, it intentionally compels leaders to seek advice from human colleagues and personal contacts, ensuring that AI functions as a valuable leadership development tool rather than an exclusive dependency that isolates the leader (Graßmann & Schermuly, Reference Graßmann and Schermuly2021). Finally, it is important to note that the evaluation and development of AI coaches is often measured primarily by user satisfaction rather than by demonstrable acquisition of new knowledge or skills (Urbancová et al., Reference Urbancová, Vrabcová, Hudáková and Petrů2021). To mitigate the dangers of AI in leadership development, the evaluation of AI-integrated training must move beyond subjective satisfaction ratings. Although user satisfaction provides useful feedback, development decisions should be fundamentally tied to objective, measurable metrics of learning and behavior change. Linking training success to demonstrable outcomes ensures that AI is optimized to maximize learning effectiveness rather than merely catering to users’ preferences or flattering their egos.

As AI is a relatively new technology that is fundamentally reshaping the workplace, more research is needed to explore its impact on both narcissism and training and development. Research is only beginning to explore how AI affects human psychology. Although AI could serve as a democratizing force by extending leadership development to more people than ever, caution must be taken to ensure that it does not cause harm as well. Depending on how intentionally it is designed, AI could either reinforce the rise of narcissistic leaders or serve as a corrective force that reshapes leadership training toward collaboration.

References

Gauglitz, I. K. (2021). Different forms of narcissism and leadership. Zeitschrift für Psychologie, 230(4), 321–324. https://doi.org/10.1027/2151-2604/a000480
Graßmann, C., & Schermuly, C. C. (2021). Coaching with artificial intelligence: Concepts and capabilities. Human Resource Development Review, 20(1), 106–126. https://doi.org/10.1177/1534484320982891
Gu, W., & Watts, L. L. (2021). Are narcissistic hiring managers more susceptible to candidate flattery? A within-subjects experimental simulation. Personality and Individual Differences, 177, 110803. https://doi.org/10.1016/j.paid.2021.110803
Hartley, K., Hayak, M., & Ko, U. H. (2024). Artificial intelligence supporting independent student learning: An evaluative case study of ChatGPT and learning to code. Education Sciences, 14(2), 120. https://doi.org/10.3390/educsci14020120
Jenkins, D., & Khanna, G. (2025). AI enhanced training, education, & development: Exploration and insights into generative AI’s role in leadership learning. Journal of Leadership Studies, 18(4). https://doi.org/10.1002/jls.70004
Kosmyna, N., Hauptmann, E., Yuan, Y., Situ, J., Liao, X.-H., Beresnitzky, A., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab preprint. https://doi.org/10.48550/arXiv.2506.08872
Maity, S. (2019). Identifying opportunities for artificial intelligence in the evolution of training and development practices. Journal of Management Development, 38(8), 651–663. https://doi.org/10.1108/JMD-03-2019-0069
Mitchell, T., Haslam, S. A., Burke, V., & Steffens, N. (2026). Human me-sources or human we-sources? Exploring the capacity for human resource practices to stimulate or suppress leader narcissism. Industrial and Organizational Psychology, 19(1), 42–60.
Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., Bhattacharya, S., Tognin, S., MacCabe, J., Twumasi, R., Alderson-Day, B., & Pollak, T. (2025). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). Working paper. https://doi.org/10.31234/osf.io/cmy7n_v5
OpenAI (2025). Introducing GPT-5. Available at https://openai.com/index/introducing-gpt-5/
Poushneh, A., Vasquez-Parraga, A., & Gearhart, R. S. (2024). The effect of empathetic response and consumers’ narcissism in voice-based artificial intelligence. Journal of Retailing and Consumer Services, 79, 103871. https://doi.org/10.1016/j.jretconser.2024.103871
Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., Cheng, N., Durmus, E., Hatfield-Dodds, Z., Johnston, S. R., Kravec, S., Maxwell, T., McCandlish, S., Ndousse, K., Rausch, O., Schiefer, N., Yan, D., Zhang, M., & Perez, E. (2025). Towards understanding sycophancy in language models. In Proceedings of the 12th International Conference on Learning Representations (ICLR 2024). Available at arXiv:2310.13548.
Sposato, M., & Dittmar, E. C. (2024). Leadership training and development in the age of artificial intelligence. Development and Learning in Organizations: An International Journal, 38(4), 4–7.
Sposato, M., & Dittmar, E. C. (2025). The AI-powered future of digital transformation: Enhancing organizations and leadership development. Journal of Work-Applied Management. https://doi.org/10.1108/JWAM-02-2025-0039
Tank, A. (2025). How AI is reshaping leadership training for today’s workplace. Fast Company. Available at https://www.fastcompany.com/91317872/how-ai-is-reshaping-leadership-training-for-todays-workplace
Taylor, D. H., & Vinauskaitė, E. (2023). AI in L&D: The state of play. Global Sentiment Survey focus. Donald H. Taylor Research Base. Available at https://donaldhtaylor.co.uk/research_base/focus-on-ai-in-ld/
Urbancová, H., Vrabcová, P., Hudáková, M., & Petrů, G. J. (2021). Effective training evaluation: The role of factors influencing the evaluation of effectiveness of employee training and development. Sustainability, 13(5), 2721. https://doi.org/10.3390/su13052721
Zao-Sanders, M. (2025). How people are really using gen AI in 2025. Harvard Business Review. Available at https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025