Pentagon Ultimatum to AI Firm Anthropic Over Safety
The U.S. Department of Defense has issued an ultimatum to AI firm Anthropic: grant unrestricted access to its models by Friday or face $200 million in contract cuts. The Pentagon seeks to expand AI use beyond previously agreed terms, potentially into kinetic applications, while Anthropic maintains strict safety safeguards. The standoff highlights growing tension between national security demands and ethical AI development.
Pentagon Pressures Anthropic on AI Safeguards, Threatens Contract Cuts
The U.S. Department of Defense has issued an ultimatum to leading artificial intelligence company Anthropic, demanding full access to its AI models by Friday; failure to comply could cost the firm up to $200 million in government contracts. Defense Secretary Pete Hegseth reportedly urged the company to relinquish safety protocols that currently restrict military applications such as autonomous lethal targeting and surveillance. The Pentagon is even considering invoking the Defense Production Act to compel Anthropic, on national security grounds, to permit unrestricted military use of its technology.
Understanding the Pentagon’s Demands
The Pentagon’s stated goal is to use AI for “all lawful purposes” in defense. Experts suggest, however, that this signals a move beyond the terms agreed last year, which stipulated that autonomous weapon systems would not be used without human control. Vanessa Vos, a researcher at the Bundeswehr University in Munich, explained that the Pentagon may be seeking to employ these technologies beyond intelligence, surveillance, and reconnaissance (ISR) for “kinetic uses”: AI systems that select and engage targets with drones across various military domains, without direct human operator control.
“So, it seems like the Pentagon wants to go beyond the actually agreed terms from last year which was set not to use autonomous weapon systems without human control. So basically it seems that now it wants to use those technologies that go beyond ISR purposes. So not only for intelligence, surveillance and reconnaissance purposes, but also kinetic uses.”
While the U.S. has a long-standing policy of employing autonomous weapon systems with “appropriate levels of human judgment,” the current situation raises concerns about the extent of this autonomy. Vos cautioned against expecting “Terminator scenarios where machines are fighting machines,” but acknowledged that the push is likely towards enhancing technologies with greater kinetic capabilities.
Concerns Over Military AI and Safety Protocols
The military’s interest in artificial intelligence for battlefield applications spans a range of technologies, including autonomous drone swarms, robotic systems, and cyber warfare capabilities. Anthropic, a company that emerged from a split with OpenAI, has implemented specific safeguards due to fundamental concerns about the development and control of advanced AI. These concerns are rooted in the potential for AI systems to evolve beyond their initial programming and intended functions.
Vos highlighted the critical need for armed forces to maintain control over their technologies. “When it comes to AI, they might sort of develop a life that goes beyond what was programmed and what was intended,” she stated. This underscores the ongoing international discussions regarding AI regulation, even in the absence of a formal treaty. The core issue remains ensuring appropriate human control, preventing machines from independently selecting and engaging targets, and preserving the military’s decision-making authority.
Anthropic’s Stance Amidst Competition
The Pentagon has reportedly been in negotiations with other major AI companies, including Google and OpenAI, over military applications, yet the current pressure appears uniquely focused on Anthropic. While the DoD signed agreements with these companies last summer establishing certain “red lines,” Anthropic is the only one now facing the threat of being designated a supply chain risk.
This suggests that Anthropic distinguishes itself through its rigorous adherence to safety protocols. “It seems that Anthropic is leading in this regard,” Vos observed, noting that safety and reliability were founding principles of the company; indeed, its separation from OpenAI was driven partly by a desire to prioritize them. The fact that Anthropic, unlike the other companies, has been included in classified U.S. materials may also explain the Pentagon’s intensified focus on its models.
“Therefore, it seems that Anthropic is leading in this regard and this is important for the DoD because so far this is actually the only company that has been included in classified US materials and the others are not.”
Looking Ahead: The Deadline and Future Implications
With the Friday deadline looming, the standoff between Anthropic and the Pentagon represents a critical juncture in the integration of advanced AI into military operations. Anthropic’s foundational commitment to safety and reliability makes it unlikely that the company will easily abandon its self-imposed restrictions under DoD pressure. Vos expressed skepticism, stating, “Personally I don’t think that they can actually go beyond those self-set red lines.” This suggests that further discussions and negotiations are likely, rather than an immediate capitulation by Anthropic.
The outcome of this dispute could have significant implications for the future development and deployment of AI in defense, setting precedents for how AI companies balance innovation with ethical considerations and safety imperatives. The Pentagon’s aggressive approach highlights the growing demand for cutting-edge AI capabilities, while Anthropic’s firm stance underscores the increasing importance of responsible AI development in an era of rapid technological advancement.
Source: What's behind the Anthropic-Pentagon dispute? | DW News (YouTube)