AI Speeds Up Warfare: Machines Now Decide Who Dies

Artificial intelligence is rapidly changing warfare, with machines now identifying and selecting targets at speeds that outpace human decision-making. A documentary, "Click to Kill, the AI war machine," explores the profound implications of AI in conflict, from increased efficiency to concerns about errors and accountability. The film highlights the growing role of private tech companies and the unclear outcomes of a global AI arms race.

AI Revolutionizes Warfare, Raising Life-and-Death Concerns

Artificial intelligence is rapidly changing how wars are fought, with machines now playing a role in identifying and even selecting targets. Decisions that once took hours are now made in seconds, ushering in a new era of warfare. A recent documentary, “Click to Kill, the AI war machine,” explores this dramatic change and its implications.

An ‘Inflection Point’ in Military Strategy

General Chris Donahue, who leads U.S. and NATO land forces in Europe, stated that the world is at an “inflection point.” He believes technology, especially AI and drones, has fundamentally altered the nature of war. This marks a once-in-a-generation moment in which technology is reshaping military strategy and operations.

AI in Action: From Iran to Ukraine to Gaza

The documentary highlights how AI is being used in real-world conflicts. In one example, during the recent conflict with Iran, AI helped track leaders by analyzing vast amounts of data from CCTV footage, radio traffic, and phone signals, enabling precise targeting. In the first 48 hours of that conflict, about 1,000 targets were identified, many with the help of machine-learning systems that tracked missile launchers and individuals.

The film traces the development of AI in warfare from 2016. It shows its use in predictive policing in the West Bank, then in Ukraine, and most recently in Gaza. While AI can make targeting more efficient and potentially reduce mistaken strikes, its use raises serious questions.

The Risk of AI Errors and Accountability

The film’s director, Adam Wishart, points out that AI systems, like everyday tools such as ChatGPT, are prone to errors. The critical question is how much error is acceptable when the decisions being made on the battlefield are matters of life and death. The issue becomes even more complex when considering who is responsible when AI makes a mistake.

Intelligence analysts who worked in Israel after the October 7th attacks described a high-pressure environment. They explained that the political goal was to strike as many targets as possible quickly. In this situation, it was impossible for humans to thoroughly analyze satellite images and signals to meet the demand for targets. Consequently, many targets were generated by AI.

These AI systems analyzed imagery or identified known Hamas operatives, then mapped out their associates. The resulting target lists were reviewed by humans, but the pace at which decisions had to be made meant that human assessment sometimes took less than a minute, raising concerns about how thorough that review could be.

Private Companies’ Growing Role

The involvement of private tech companies in military AI is a significant development. Louis Mosley, CEO of Palantir UK, acknowledged the deep moral complexity of working for a company that speeds up the process of warfare. He stated that his company grapples with these issues daily.

Historically, private firms have supplied military equipment. However, Wishart notes a fundamental change: tech companies now have “forward deployed engineers,” meaning private-sector employees are involved in AI-driven decisions on or near the battlefield. Some argue this blurs the lines of control over armed forces and national sovereignty.

Towards Autonomous Weapons?

While the idea of fully autonomous weapons, like those seen in science fiction, may not be imminent, AI is already deeply integrated into the “kill chain” – the process from sensing a target to striking it. In Ukraine, AI is embedded in drones. Soldiers often use tablets showing maps with target pins, many of which are AI-generated.

The film explains that AI analyzes sensor data, and that drones use computer vision, another form of AI, to fly themselves to targets. This gradual infusion of AI into each stage of the kill chain has happened largely without public awareness, potentially undermining human oversight and decision-making.

The AI Arms Race: Who Is Winning?

It remains unclear who is leading the global AI arms race. Currently, American tech companies appear to be at the forefront. This poses a challenge for Europe, which lacks its own major AI powerhouses and relies heavily on U.S. technology and support through NATO.

If the U.S. were to withdraw its support, the European military landscape could change dramatically. The documentary calls attention to the rapid advancements and the opaque nature of AI development in warfare, urging a closer look at its profound consequences.

What to Watch Next

As AI technology continues to advance, its integration into warfare will likely deepen. Key areas to monitor include international efforts to regulate AI in conflict, the transparency of AI targeting systems, and the ongoing debate about accountability for AI-driven actions. The balance between military efficiency and ethical considerations will be crucial in shaping the future of warfare.


Source: AI Warfare Hits ‘Inflection Point’, Changing Nature Of War (YouTube)

Written by

Joshua D. Ovidiu
