Pentagon AI Use Sparks Controversy: “Woke” Ban vs. Military Need
The U.S. military reportedly used Anthropic's AI to support strikes in Iran just hours after President Trump banned its use across the government, calling the technology "too woke." The contradiction highlights the tension between national security needs and the administration's AI policies, and raises concerns about future investment in the U.S. AI sector.
Pentagon Leverages AI Despite Presidential Ban
In a striking turn of events, the U.S. military reportedly utilized an artificial intelligence system developed by Anthropic for identifying targets during strikes in Iran. This operation occurred less than 12 hours after President Trump announced a ban on the government’s use of Anthropic’s AI, citing concerns that the technology was “too woke.” The Wall Street Journal, citing sources familiar with the matter, revealed that Anthropic’s AI chatbot, Claude, played a role in these military operations. This apparent contradiction—using a system deemed too controversial for federal government deployment—has raised significant questions about the administration’s AI policies and the military’s reliance on advanced technology.
The “Woke” Ban and its Implications
The ban, announced Friday afternoon, was framed as a response to Anthropic’s stated unwillingness to let its models be used for mass domestic surveillance or fully autonomous weapons absent full confidence in the models’ behavior. The directive aimed to phase out the use of Anthropic’s AI across the federal government within six months. The policy extended to any entity doing business with the government, requiring such entities to prove their systems did not incorporate Anthropic’s AI. The move suggested a broader administration stance against AI perceived as overly progressive or ethically misaligned with certain government priorities.
Military Operations Override Policy
Despite the presidential directive, the military’s continued use of Anthropic’s AI for critical operations, particularly in a conflict zone, underscores how indispensable the technology is perceived to be. Jamil Jaffer, founder of the National Security Institute, said of the situation, “It demonstrates how important Anthropic and its Claude system is to the government.” He noted that the system’s distribution to commands around the world for intelligence analysis and target identification underscores its value. “It’s obviously an important product. Obviously a huge dispute. It’s a little bit between the Department of War and the company itself,” Jaffer added, referring to the public exchange between the company and the Department of Defense.
A New Discourse on AI Regulation
The situation inverts the typical debate surrounding AI regulation. Usually, the discussion centers on how governments should regulate the development and deployment of AI technologies. Here, Anthropic is the party attempting to restrict the use of its own technology, particularly by governments, on ethical grounds. Jaffer observed, “Rather than the government regulating AI, it’s almost like the AI companies attempting to regulate.” He explained that companies working to ensure their AI is trusted, safe, and secure are essentially building trust with users, which can lead to greater adoption and stickiness of their products. The tension arises when a company restricts use for lawful purposes, a scenario Jaffer likened to a defense contractor refusing to allow a fighter jet to be used for specific military actions.
Concerns Over Retaliation and Investment
Beyond the operational and regulatory paradox, there are significant concerns about the potential impact of the administration’s actions on the broader AI industry in the United States. President Trump’s personal criticism of Anthropic, labeling the company as “woke” and “radical,” along with attempts to blacklist it, has been described by some, including former AI advisor Dean Ball, as “attempted corporate murder.” Such actions could deter future investment and innovation in the U.S. AI sector, particularly at a time when the nation is in a technological competition with countries like China. Jaffer expressed agreement that threatening companies is problematic, stating, “It’s one thing to say, look, we’re going to pull all of our contracts, even across the government… But then the Secretary of Defense went further… where he said we’re going to threaten every company who’s used Anthropic and say you can’t use.” He cautioned that such broad threats could have legal and economic repercussions.
Legal and Future Implications
The dispute also raises complex legal questions, including whether the government can compel Anthropic to provide its AI models, potentially through authorities like the Defense Production Act. Jaffer noted that such threats could be legally questionable and may end up being litigated. The situation exemplifies the delicate balance the U.S. government must strike between leveraging cutting-edge AI for national security and respecting the ethical boundaries set by AI developers, all while fostering a competitive domestic industry. The ongoing tension between the Pentagon’s operational needs and the administration’s policy directives, coupled with the potential chilling effect on investment, makes this a critical juncture for America’s AI strategy.
Source: The Pentagon and AI: Inside America’s new war machine (YouTube)