In this article, we maintain that the anticipated integration of artificial intelligence (AI)-enabled systems into state-level decision-making over whether and when to wage war will be accompanied by a hitherto neglected risk. Namely, the incorporation of such systems will engender subtle but significant changes to the state’s deliberative and organisational structures, its culture, and its capacities, in ways that could undermine its adherence to international norms of restraint. In offering this provocation, we argue that the gradual proliferation and embedding of AI-enabled decision-support systems within the state – what we call the phenomenon of ‘Borgs in the org’ – will lead to four significant changes that, together, threaten to diminish the state’s crucial capacity for ‘institutional learning’. Specifically, the state’s reliance on AI-enabled decision-support systems in deliberations over war initiation will invite: (i) the disruption of deliberative structures and chains of command; (ii) the occlusion of crucial steps in decision-making processes; (iii) institutionalised deference to computer-generated outputs; and (iv) future plans and trajectories that are overdetermined by past policies and actions. The resulting ‘institutional atrophy’ could, in turn, weaken the state’s responsiveness to external social cues and censure, thereby making the state less likely to engage with, internalise, and adhere to evolving international norms of restraint. As a collateral effect, this weakening could contribute to the decay of these norms themselves if such institutional atrophy were to become widespread within the society of states.