AI in Warfare Risks Civilian Lives, Expert Warns
The use of AI in warfare, particularly for target identification, is raising serious concerns about increased civilian casualties due to accelerated decision-making and potential automation bias. Professor Elke Schwarz warns that current AI models are often inadequately tested and that the speed of AI operations can undermine crucial ethical obligations for mitigating harm to civilians.
AI’s Role in Modern Warfare Sparks Civilian Casualty Concerns
The integration of artificial intelligence into military operations, particularly in target identification and strike execution, is raising significant ethical questions and fears of increased civilian casualties. A recent report highlights the use of AI systems, such as Anthropic’s Claude, by the US and Israel in air strikes, with one system identifying 1,000 targets in just 24 hours. This technological leap, while promising speed and efficiency, presents profound moral dilemmas and concerns about the adequacy of human oversight in AI-driven decision-making.
Accelerated Targeting and Diminished Restraint
Professor Elke Schwarz, a professor of political theory at Queen Mary University of London, articulated these concerns, emphasizing the staggering speed at which AI models are being deployed in live conflicts. “The accelerated speed with which, first of all, the AI products and models are rolled out into live conflicts is quite staggering, and then the accelerated speed with which that prioritizes the suggestions of targets, and then the actioning of targets, is also quite unusual and should give us pause for thought,” Professor Schwarz stated.
She explained that for each identified target, parties to a conflict have a legal and moral obligation to verify its status, ensure civilians are not harmed or at least mitigate civilian harm, and confirm the target’s legitimacy. “Doing that at that accelerated speed is significantly curtailed in this environment,” she warned.
Upholding Ethical Obligations Amidst Technological Advancement
Professor Schwarz pushed back against the idea that the rapid pace of AI necessitates a re-evaluation or relaxation of these fundamental obligations. “No, I don’t think that can be the case,” she asserted. “The obligations are there because war and violence and conflict are a really horrible state of affairs. It should not be the norm.” She stressed that the laws of war and armed conflict, with their centuries-long tradition, are founded on the principle of civilian harm mitigation.
“We should not, we cannot, adjust our laws and the obligations that we have to other humans in accordance with technology that is being rolled out perhaps before it is sufficiently tested, before it is sufficiently tried,” Professor Schwarz added. She pointed out that many AI models, including large language models like Claude, are relatively new and still in a “prototyping phase,” lacking the decades of testing and verification seen with established technologies.
Automation Bias and the Risk of Over-Reliance
The potential for AI to lead to a reduction in human intervention was underscored by a recent incident where a school was hit, resulting in numerous casualties, including children. While investigations are ongoing, it raises the possibility that AI may have “overstepped the mark” with insufficient human oversight. Professor Schwarz described this as a persistent risk that experts have warned about for years.
“When you have a human in this kind of accelerated action loop, then there’s a danger that they can’t necessarily form a sufficiently robust cognitive picture of what is going on, that they don’t necessarily always know the data that goes into these systems that then make a decision,” she explained. This can lead to “automation bias” and an “action bias,” where actions are taken before a human can effectively intervene. Furthermore, she noted the inherent risks associated with AI systems, such as outdated or inaccurate data, and the known tendency of large language models to “hallucinate.”
Potential Benefits and Broader Applications of AI in Defense
Despite these critical concerns, Professor Schwarz acknowledged that AI has a longer history in defense applications beyond targeting systems. “AI in the military domain can be used for many other actions,” she said. These include streamlining logistics, optimizing supply chains, facilitating translations, and enabling predictive maintenance. In these contexts, AI’s capabilities in speed, scale, and efficiency can be highly beneficial and often involve fewer moral quandaries when applied to objects rather than human lives.
However, when AI is involved in targeting, where human lives are at stake, the acceleration of the process risks normalizing it as a mere workflow. Professor Schwarz warned that the more actions involving humans are routinized, the greater the tendency to “abdicate your responsibility” and the more prevalent “dehumanization” becomes. This, she concluded, “disallow[s] to some degree for moral restraint in the use of force. And what we then see is of course an expansion of violence and force. And almost always in the context of warfare, civilians tend to bear the brunt of that.”
Looking Ahead: The Imperative for Responsible AI Deployment
The deployment of AI in warfare presents a complex challenge, balancing potential operational advantages with profound ethical and humanitarian risks. As these technologies continue to evolve and become more integrated into military strategies, the international community faces an urgent need to establish clear guidelines, robust oversight mechanisms, and a commitment to upholding international humanitarian law. The focus must remain on ensuring that technological advancement does not come at the cost of civilian lives or erode the fundamental principles of moral restraint in conflict. The development and adherence to rigorous testing, transparent data practices, and meaningful human control will be paramount in navigating this new era of AI-enabled warfare.
Source: Using AI In Warfare Could Increase Civilian Casualties | Professor Elke Schwarz (YouTube)