Pentagon Taps AI for War; OpenAI Steps In After Firm Refuses

The Pentagon's use of AI in military operations, including leveraging Anthropic's Claude and Palantir's Maven system, has sparked controversy. OpenAI's subsequent deal with the military after Anthropic's refusal has led to significant public backlash and a 'Quit GPT' movement.


Pentagon Embraces AI in Warfare, Sparks Ethical Debate

The U.S. military has entered a new era of warfare, leveraging artificial intelligence for critical operations, a development that has sent ripples through the tech industry and raised significant ethical questions. Recent reports from The Wall Street Journal reveal that the Pentagon has utilized AI, specifically Anthropic’s Claude model, in high-profile operations, including the effort to capture Venezuelan President Nicolás Maduro and in conflicts involving Israel, Iran, and the United States. This marks a significant shift, with allies reportedly set to study these deployments.

Custom AI Models Powering Military Operations

The AI systems employed by the military are distinct from the consumer-facing versions. Anthropic, for instance, developed a custom model for national security purposes, running on dedicated hardware within classified data centers. This setup provides a level of performance and resource allocation orders of magnitude greater than what civilian users experience, where compute capacity is shared across millions. This concentrated power and specialized data create a military-grade AI that is fundamentally different and more capable than public offerings.

Despite the advanced capabilities, human oversight remains crucial. Experts like Paul Scharre, executive vice president at the Center for New American Security, emphasize that AI can still err, necessitating human verification for life-or-death decisions. Even Anthropic’s CEO, Dario Amodei, acknowledges the limitations, stating that the technology is not yet ready for fully autonomous weapon systems and that judgment is still needed to determine what models can reliably perform.

The ‘Maven’ System: AI’s Role in Target Prioritization

A prime example of AI in action is Maven, a targeting system developed by Palantir and powered by a custom version of Anthropic’s Claude. The system is designed to process vast amounts of classified data from satellites, surveillance feeds, and intelligence sources; in one instance, it was reportedly used to strike 1,000 targets in Iran within 24 hours. Maven gathers data from diverse sources, including hacked traffic cameras, interprets it, and outputs actionable insights such as precise location coordinates and real-time target prioritization. Studies suggest such systems can dramatically increase operational efficiency, allowing a small team to perform the work of a much larger contingent.

Ethical Standoff: Anthropic vs. the Pentagon

The integration of AI into warfare took a dramatic turn when tensions arose between Anthropic and the U.S. government. The Pentagon sought to contract Anthropic for uses that included autonomous AI weapon control and mass surveillance of U.S. citizens through data collection. Anthropic refused, insisting on safeguards against mass surveillance of Americans and against fully autonomous weapons operating without human oversight.

In response to Anthropic’s refusal to comply with the Pentagon’s broader demands, the U.S. government reportedly banned its agencies from using Anthropic’s tools, labeling the company a supply chain risk—a designation never before applied to an American firm. This move was met with significant controversy. The government’s interest in using Claude for surveillance reportedly involved analyzing bulk purchased data on Americans, including geolocation, web browsing history, and personal financial information.

OpenAI Steps In, Sparks Public Outcry

In a move that has been widely criticized, OpenAI, the creator of ChatGPT, stepped in to potentially take the deal that Anthropic rejected. Just hours after the fallout between Anthropic and the government, OpenAI CEO Sam Altman announced his company would partner with the military. While Altman stated that OpenAI’s agreement includes the same safety principles Anthropic advocated for—prohibiting domestic mass surveillance and ensuring human responsibility for the use of force—questions linger about the speed and nature of these discussions. The New York Times reported that OpenAI and the government had only been in discussions for about two days prior to the announcement, leading to skepticism about the robustness of the safeguards.

This decision triggered a significant public backlash, manifesting as the ‘Quit GPT’ movement. Organizers claim that millions of users have unsubscribed from OpenAI services or otherwise taken action against the company’s military partnership, a campaign amplified through social media and street protests. The situation contrasts starkly with OpenAI’s origins as a non-profit founded to benefit humanity, leading many to question its current trajectory.

Why This Matters: The Future of AI and Surveillance

The implications of AI in warfare and surveillance are profound. The ability of AI to process vast datasets at unprecedented speeds raises concerns about the potential for pervasive government surveillance. The source video highlights how data, once collected by private firms, can be analyzed by AI to build profiles on individuals based on their location, personal information, and political affiliations. This capability, though potentially legal under current interpretations of laws like the Patriot Act, represents a significant shift in the balance between privacy and state power.

The article stresses the importance of individual and collective action. Recommendations include reducing one’s presence on data broker sites, as this data is a key input for AI-driven surveillance. Furthermore, advocating for new legislation and raising public awareness are presented as crucial steps to guide the development and deployment of AI technology responsibly. The spread of surveillance technologies, even beyond U.S. borders, as seen with Palantir’s installations in Australia and Meta’s facial recognition efforts with Ray-Ban glasses, underscores the global nature of these challenges.

Ultimately, the article questions who AI truly serves. While it offers convenience and productivity gains for individuals, its rapid integration into military and surveillance applications poses catastrophic risks if mishandled. The narrative serves as a call to awareness and action, urging readers to consider the direction of technological advancement and its potential impact on society.


Source: AI is Now Being Used in War (YouTube)

Written by Joshua D. Ovidiu
