AI Powers Military Strikes: Accuracy and Ethical Alarms Sounded
Artificial intelligence is significantly enhancing military targeting accuracy, but raises serious ethical concerns, particularly regarding potential misidentification of civilian sites. Tech companies are grappling with their role in developing AI for warfare, while experts warn of an accelerating, potentially uncontrollable, arms race in AI.
In the wake of successful strikes against Iranian targets, former President Donald Trump lauded the American armed forces, attributing their efficacy to superior equipment and personnel. However, a deeper examination reveals a significant technological advancement underpinning these operations: artificial intelligence. Professor Anthony King, Director of the Strategy and Security Institute at the University of Exeter and author of “AI, Automation and War: The Rise of a Military Tech Complex,” shed light on how AI is transforming modern warfare, while also raising critical ethical concerns.
AI’s Role in Target Identification and Processing
Professor King explained that both the U.S. and Israel have become pioneers in leveraging artificial intelligence to process the immense volume of data generated by modern battlefields. “There are hundreds, thousands of sensors across every battle space now, from satellite to open source to signal intelligence to ground sensors,” King stated. “What the Israelis and especially the U.S. have been pioneers in is taking all of that mass of data and then training certain AI models to process that massive data in order to identify enemy signatures in it.”
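To make the pipeline King describes more concrete, here is a minimal, purely illustrative sketch: detections from several feeds are fused by geographic proximity, and a toy scoring function stands in for a trained model. Every name, threshold, and data structure below is hypothetical; the real systems are classified and vastly more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str      # e.g. "satellite", "sigint", "ground" (hypothetical labels)
    location: tuple  # (lat, lon)
    features: dict   # raw attributes extracted from the feed

def fuse(detections: list[Detection], radius_deg: float = 0.02) -> list[list[Detection]]:
    """Group detections from different sensors that refer to the same place."""
    clusters: list[list[Detection]] = []
    for d in detections:
        for cluster in clusters:
            ref = cluster[0].location
            if (abs(d.location[0] - ref[0]) < radius_deg
                    and abs(d.location[1] - ref[1]) < radius_deg):
                cluster.append(d)
                break
        else:
            clusters.append([d])
    return clusters

def signature_score(cluster: list[Detection]) -> float:
    """Stand-in for a trained model: more independent corroborating
    sources means higher confidence that a known signature is present."""
    sources = {d.source for d in cluster}
    return len(sources) / 3.0  # toy heuristic, not a real model

feeds = [
    Detection("satellite", (34.10, 49.70), {"thermal": True}),
    Detection("sigint",    (34.10, 49.70), {"emitter": "C2"}),
    Detection("ground",    (34.11, 49.70), {"vehicles": 4}),
]
for cluster in fuse(feeds):
    print(len(cluster), "sources, score", round(signature_score(cluster), 2))
```

The point of the sketch is only the shape of the problem: many noisy feeds, one fusion step, one learned scoring step, and a ranked list of candidate signatures at the end.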
This AI-driven intelligence has been observed in recent conflicts. The U.S. Army’s 18th Airborne Corps developed a sophisticated system in 2022 that reportedly enabled Ukrainian forces to identify Russian headquarters. More controversially, Israel has used systems known as “Lavender” and “Gospel” in Gaza. These systems are designed to identify targets and ensure weapons reach their intended destinations, but they are not without their critics.
Concerns Over Accidental Targeting and AI Blunders
One of the most pressing concerns highlighted by Professor King relates to the potential for AI to misidentify targets, leading to tragic errors. He cited a specific instance in the current conflict where there are “concerns that the targeting of the girls’ school may have been accidental because it used to form part of an IRGC base.”
However, King also contextualized these concerns by comparing them to historical precedents. “Mistaken bombing attacks are absolutely an inevitable tragic element of all military operations,” he argued, referencing the 1991 Gulf War’s mistaken bombing of an Iraqi shelter that killed 300 people. “Mistakes in targeting are absolutely a perennial problem, a perennial tragedy, a perennial outrage of war. It is not distinct to AI.”
Despite these historical parallels, King emphasized that the targeting in recent operations, particularly in Iran, has been “remarkably accurate,” especially in dynamic situations like “decapitation operations” where leaders are targeted in their offices or bunkers.
Ethical Dilemmas and Corporate Responsibility
A significant portion of the discussion revolved around the ethical responsibilities of AI companies involved in military applications. The dispute between AI firm Anthropic and the U.S. government serves as a prime example. Anthropic has expressed concerns about its generative AI model, “Claude,” potentially being used for the surveillance of U.S. citizens and for lethal autonomous weapons systems.
Professor King described the situation as an “extraordinary argument” where the U.S. Secretary of Defense, in response to Anthropic’s ethical reservations, labeled the company a “supply chain risk.” This highlights a growing tension between the tech sector’s ethical considerations and the Pentagon’s drive for advanced military capabilities.
“Over the last 5 years in the U.S., Silicon Valley and the tech sector have moved very close to government,” King observed. “But yes, there are significant ethical and political issues about the involvement of tech companies within military operations.”
The Unpredictability and Future of AI in Warfare
The conversation also touched upon the perceived unpredictability of AI and its potential to escalate conflicts, even to the point of considering nuclear options. Professor King, however, offered a more grounded perspective.
“No power is going to apply AI to a nuclear release. AI is a decision-support function. It helps commanders to accelerate and improve their decision making. It allows them to see across the battle space much more clearly so that they can target and plan better.”
He cautioned against what he termed “fantastical” notions of AI fundamentally taking over military operations. Instead, King stressed that the key issues lie closer to home: “Where is the data coming from? Is the data accurate? How do generals utilize AI to make targeting decisions in which their opponents, and of course civilians, will or may be killed?”
The debate between Anthropic and the Pentagon, however, underscores a fundamental fear: the potential for the autonomization of decisions, removing human input from the loop. This is particularly concerning given the immense damage that can be inflicted even without nuclear weapons if such actions are taken without human oversight.
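The “human in the loop” safeguard at issue can be captured in a few lines. The sketch below, with entirely hypothetical names and thresholds, shows the principle: the model may only recommend, and no strike decision can proceed without an explicit human authorization.

```python
def model_recommendation(track_id: str) -> tuple[str, float]:
    """Placeholder for an AI targeting model's output (label, confidence)."""
    return ("candidate_target", 0.91)  # invented values for illustration

def authorize_strike(track_id: str, human_approval: bool) -> bool:
    label, confidence = model_recommendation(track_id)
    # The model's confidence alone is never sufficient: without an
    # affirmative human decision, the system must refuse.
    if not human_approval:
        return False
    return label == "candidate_target" and confidence > 0.85

print(authorize_strike("T-042", human_approval=False))  # False: AI alone cannot act
print(authorize_strike("T-042", human_approval=True))   # True: the human remains the gate
```

Autonomization, in these terms, is simply the removal of the `human_approval` check, which is why critics see it as a one-line change with outsized consequences.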
The Accelerating Pace of Military AI Development
The lucrative nature of the defense AI space, coupled with rapid advancements seen in conflicts like Ukraine (particularly in drone technology), suggests an accelerating arms race in artificial intelligence. Professor King noted that the AI being developed and utilized by the Pentagon is likely far more advanced than what is publicly available commercially.
“War always turbocharges technology,” he remarked. “That’s why you’ve got planes.” This dynamic implies that AI capabilities will continue to advance rapidly, potentially outpacing our ability to fully comprehend or control their implications. The concerns raised by companies like Anthropic may stem from their foresight into a direction of travel that, once set, becomes exceedingly difficult to alter.
Looking Ahead
The increasing integration of AI into military operations presents a complex landscape of enhanced capabilities and profound ethical challenges. As nations continue to explore the strategic advantages of AI in targeting and intelligence, the debate surrounding accountability, transparency, and the potential for unintended consequences will undoubtedly intensify. The coming months will be crucial in observing how governments and AI developers navigate these critical issues, particularly in light of ongoing global security concerns and the rapid evolution of artificial intelligence technology.
Source: Concerns AI Blunder May Have Led To Strike On Iranian Girls School | Anthony King (YouTube)