Artificial intelligence directly influences how individuals allocate attention, regulate emotions and derive pleasure. Social media feeds, recommendation engines and generative companions are designed to optimise engagement through continuous behavioural feedback, creating environments in which reinforcement is personalised, adaptive and largely invisible to the user. These systems no longer merely reflect user preferences; they actively sculpt them through iterative reinforcement. Rather than focusing on excessive use of any single platform or activity, it may be clinically useful to consider how pervasive, cross-domain reinforcement architectures form an externalised reward ecology that can be described as an ‘algorithmic dopamine economy’.
Artificial-intelligence-driven platforms rely on behavioural telemetry, including clicks, pauses, scrolling speed and dwell time, to infer and reinforce micro-preferences. This feedback loop operates through repeated cycles of prediction and reinforcement, shaping engagement in ways that parallel established principles of reward learning.[1] These processes have been plausibly linked to neural systems supporting motivation and habit formation, including reward-learning circuitry, although direct causal pathways remain incompletely established. Nonetheless, continuous algorithmic reinforcement is pervasive, subtle and opaque, and users are rarely aware that their reward prediction and error-correction patterns are being shaped externally. Emerging neuroimaging and psychometric studies have reported associations between intensive engagement with algorithmically personalised content and differences in reward sensitivity, prefrontal control and impulsivity.[2,3] Although causality remains uncertain, converging behavioural and neuroscientific findings raise clinically relevant questions about how sustained artificial-intelligence-mediated engagement interacts with executive control, particularly in adolescents and young adults with still-developing prefrontal circuitry.[4]
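This prediction–reinforcement cycle can be caricatured in a few lines of code. The sketch below is purely illustrative rather than a description of any real platform: a toy epsilon-greedy bandit treats simulated dwell time as a reward signal and corrects its engagement predictions with a reward-prediction-error update. The content categories, learning rate and dwell-time simulator are all hypothetical.

```python
import random

# Illustrative sketch only: a toy epsilon-greedy bandit that 'personalises'
# a feed by treating dwell time as a reward signal. Real recommender systems
# are far more complex; this shows only the predict -> observe -> correct loop.

CONTENT_TYPES = ["news", "short_video", "companion_chat"]  # hypothetical
ALPHA = 0.1    # learning rate for the value update
EPSILON = 0.1  # probability of exploring a random content type

value = {c: 0.0 for c in CONTENT_TYPES}  # predicted engagement per type

def choose(values):
    """Serve mostly the current best prediction, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(CONTENT_TYPES)
    return max(values, key=values.get)

def observe_dwell_time(content_type):
    """Stand-in for behavioural telemetry (clicks, pauses, scroll speed)."""
    base = 5.0 if content_type == "short_video" else 3.0
    return max(0.0, random.gauss(base, 1.0))

for _ in range(1000):
    shown = choose(value)
    reward = observe_dwell_time(shown)
    delta = reward - value[shown]   # reward-prediction error
    value[shown] += ALPHA * delta   # incremental error correction
```

Each iteration predicts engagement, observes behaviour and corrects the prediction; it is this error-correction structure, run continuously and invisibly at scale, that parallels the reward-learning processes described above.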
Existing diagnostic frameworks, such as internet gaming disorder in DSM-5-TR and gaming disorder in ICD-11, capture only narrow manifestations of overuse. Yet the reinforcement architecture now extends across news consumption, short-form video, virtual companionship, productivity tools and health tracking. The psychiatric construct provisionally described here as algorithmic engagement dysregulation is intended as an environmental exposure framework rather than a new diagnosis, characterising maladaptive behavioural and affective patterns shaped by cross-platform algorithmic reinforcement. Clinically, patients report attentional fragmentation, offline anhedonia, irritability after disengagement and diminished intrinsic motivation in the context of intensive use of personalised feeds or recommendation systems. Unlike in classical addictions, withdrawal is often affective rather than somatic, manifesting as restlessness, low mood or cognitive blunting when access is restricted, with rapid relief on re-engagement.[5] These features may overlap with anxiety, depressive, impulse-control or attentional disorders and can coexist with them, complicating differential diagnosis; what suggests algorithmic exposure as a contributing factor is the specificity of dysphoria to disruption of personalised, adaptive and opaque reinforcement processes, rather than to digital use in general.
The underlying psychopathology probably reflects several interacting mechanisms. Repeated micro-reinforcement may produce tolerance-like effects, whereby greater algorithmic novelty or intensity is required to elicit comparable engagement, a phenomenon that can be understood as reward saturation. Continuous exposure to rapid content shifts may strain attentional networks, leading to cognitive fatigue and irritability. Personalised content often privileges emotionally salient material, which may reinforce affective extremes and disrupt mood regulation through processes akin to affective conditioning. Some individuals may increasingly rely on algorithmic environments for affect regulation, weakening intrinsic coping and reflective control. Collectively, these processes may erode volitional regulation, foster impulsive engagement and attenuate natural reward responsiveness, although their specific contribution relative to pre-existing vulnerabilities remains unclear.
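The tolerance-like reward saturation described above can likewise be made concrete with a toy habituation model. This is a minimal sketch under assumed dynamics, not an empirical model: a simulated user's responsiveness to familiar content decays with repeated exposure, so the optimiser escalates novelty to hold engagement level. All rates and increments are hypothetical.

```python
# Illustrative sketch only: a toy habituation model of 'reward saturation'.
# Responsiveness to familiar content decays with exposure, so an engagement
# optimiser must keep adding novelty; all parameters are hypothetical.

HABITUATION_RATE = 0.15  # how quickly repeated exposure dulls the response
RECOVERY_RATE = 0.02     # slow spontaneous recovery between exposures

sensitivity = 1.0  # current responsiveness to familiar content
novelty = 0.0      # extra stimulus intensity the optimiser supplies

for exposure in range(1, 21):
    engagement = sensitivity * (1.0 + novelty)
    # Habituation: each exposure blunts the response to similar material.
    sensitivity = min(1.0, sensitivity * (1 - HABITUATION_RATE) + RECOVERY_RATE)
    # The optimiser escalates novelty whenever engagement sags below baseline.
    if engagement < 1.0:
        novelty += 0.1
    print(f"exposure {exposure:2d}: engagement={engagement:.2f}, novelty={novelty:.2f}")
```

The dynamic of interest is the ratchet: engagement is held near baseline only by continually increasing stimulus intensity, a computational analogue of tolerance.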
For psychiatry, algorithmic environments represent a novel category of psychosocial stressor with potential biological and behavioural consequences. Clinical assessment may therefore benefit from brief, structured questions about digital engagement, including which personalised platforms are used most, how patients feel before and after engaging with feeds, how attention shifts during use, and whether irritability or restlessness emerges when access is restricted. In therapeutic settings, cognitive–behavioural and mindfulness-based approaches can be adapted to address algorithmic cues, helping patients to identify conditioned triggers, restore attention control and rebuild intrinsic reward.[6] This may be complemented by psychoeducation about algorithmic design and deliberate changes to digital routines.
Public health responses to algorithmically mediated engagement will need to extend beyond simple screen-time recommendations. At the same time, it is important to recognise that artificial-intelligence-mediated systems can also confer benefits, including social connection, access to information and adaptive support, and that vulnerability to harm is heterogeneous rather than universal.[7] Within this context, greater transparency around recommendation systems and education about digital choice architecture may support informed engagement without assuming pathology. Schools and workplaces could integrate psychoeducation on algorithmic influence, emphasising that digital environments are engineered rather than neutral, with psychiatry clarifying how reinforcement design affects attention, affect and self-regulation across populations.
At the policy level, psychiatrists can collaborate with technologists, ethicists and legislators to delineate boundaries between persuasive design and behavioural manipulation. Algorithmic governance that prioritises engagement over well-being risks turning collective attention into a tradable commodity with psychiatric cost. Concepts such as neuroethical auditing, in which algorithms that may alter affective states undergo psychiatric risk assessment before deployment, have been proposed as one way to anticipate psychological harm.[8] Although these ideas remain conceptual, psychiatric insights into motivation, vulnerability and consent can shape broader discussions on responsible artificial intelligence design, without implying regulatory authority or immediate clinical application.
Empirical research should move from correlation to mechanism. Longitudinal studies combining ecological momentary assessment, passive digital phenotyping and neuroimaging could help to clarify temporal relationships among algorithmic exposure, reward processing and affect regulation.[9] Translational work could also explore whether interventions that influence reward sensitivity or executive control can modify patterns of compulsive engagement with algorithmically mediated environments. Such possibilities remain exploratory, and their relevance to clinical practice is currently uncertain. Conceptual inquiry is also needed to determine whether algorithmic engagement dysregulation represents a discrete clinical entity or a dimensional vulnerability distributed across existing diagnostic categories.
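As a concrete illustration of the temporal alignment such studies require, the sketch below pairs each ecological-momentary-assessment mood prompt with passively logged feed use in the preceding hour. The data, field layout and one-hour window are entirely hypothetical; real digital-phenotyping pipelines involve consent, de-identification and far richer signals.

```python
from datetime import datetime, timedelta

# Illustrative sketch only: aligning EMA mood ratings with the passively
# logged personalised-feed use that preceded each prompt. All values are
# hypothetical stand-ins for consented, de-identified study data.

usage_log = [  # (timestamp, minutes of personalised-feed use)
    (datetime(2025, 1, 1, 9, 10), 12),
    (datetime(2025, 1, 1, 9, 40), 18),
    (datetime(2025, 1, 1, 13, 5), 25),
]
ema_ratings = [  # (prompt timestamp, self-rated mood, 1-10)
    (datetime(2025, 1, 1, 10, 0), 4),
    (datetime(2025, 1, 1, 14, 0), 6),
]

WINDOW = timedelta(hours=1)

for prompt_time, mood in ema_ratings:
    prior_use = sum(
        minutes for t, minutes in usage_log
        if prompt_time - WINDOW <= t < prompt_time
    )
    print(f"{prompt_time:%H:%M}  mood={mood}  feed use in prior hour={prior_use} min")
```

Exposure–affect pairs of this kind, accumulated within individuals over time, are what allow temporal relationships to be tested rather than assumed.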
Artificial intelligence has transformed attention into an engineered resource. In doing so, it has contributed to the emergence of an externalised reward environment that may recalibrate motivation, emotion and self-regulation. Recognition of algorithmic reinforcement as a potential psychiatric risk factor could improve prevention, assessment and intervention, and allow psychiatry to engage with artificial intelligence not as an opponent but as a partner in aligning technological advances with psychological sustainability and human well-being.
Author contributions
All authors contributed to conceptualisation, literature review, and drafting and revision of the manuscript.
Funding
This study received no specific grant from any funding agency, commercial or not-for-profit sectors.
Declaration of interest
None.