Integrating AI into military decision-making processes concerning the resort to force raises new moral challenges. A key question is: How can we assign responsibility in cases where AI systems shape the decision-making process on the resort to force? AI systems do not qualify as moral agents, and owing to their opacity and the “problem of many hands,” responsibility for decisions made by a machine cannot be attributed to any one individual. To address this socio-technical responsibility gap, I suggest establishing “proxy responsibility” relations. Proxy responsibility means that an actor takes responsibility for the decisions of another actor or synthetic agent to whom responsibility for those decisions cannot be attributed. This article discusses the option of integrating an AI oversight body to establish proxy responsibility relations within decision-making processes on the resort to force. I argue that integrating such an oversight body creates the preconditions necessary for attributing proxy responsibility to individuals.