This article examines the growing use of artificial intelligence (AI)-enabled decision support systems in targeting operations and their implications for proportionality assessments under international humanitarian law (IHL). Emphasizing the primacy of the duty of constant care and of precautions in attack as obligations that must be exhausted before and during proportionality assessments, the article advocates for a fuller understanding of civilian harm. It traces the historical trajectory of “quantification logics” in targeting, from the Vietnam War to contemporary AI integration, and analyzes how such systems may reshape decision spaces, cognitive processes and accountability in armed conflict. Specifically, the article argues that over-reliance on computational models risks displacing the contextual, qualitative judgement essential to lawful proportionality determinations, potentially normalizing civilian harm. It concludes with recommendations for preserving the centrality of human reasoning to IHL compliance in targeting operations.