Pentagon Considers Cutting Ties with Anthropic Over AI Usage Restrictions
The Pentagon is weighing a potential break with artificial intelligence firm Anthropic over the company’s reluctance to fully relinquish control over how its AI models are used by the U.S. military, a senior administration official revealed on Saturday.
The dispute centers on the Pentagon’s demand that four leading AI companies – Anthropic, OpenAI, Google, and xAI – allow their tools to be used for “all lawful purposes.” This encompasses sensitive areas such as weapons development, intelligence gathering, and battlefield operations. Anthropic has resisted these terms, leading to months of stalled negotiations and growing frustration within the Department of War.
According to the official, Anthropic’s primary concerns revolve around limiting the use of its AI model, Claude, in two specific areas: the mass surveillance of American citizens and the development of fully autonomous weapons systems. However, the Pentagon argues that defining the boundaries of these restrictions presents significant practical challenges.
“There is considerable gray area around what would and wouldn’t fall into those categories,” the official explained. “It’s unworkable for the Pentagon to have to negotiate individual use-cases with Anthropic—or have Claude unexpectedly block certain applications.”
The possibility of ending the partnership is very real. “Everything’s on the table,” the official stated, adding that any transition would need to be carefully managed. “But there’ll have to be an orderly replacement [for] them, if we think that’s the right answer.”
Anthropic maintains its commitment to supporting U.S. national security. A company spokesperson said that conversations with the U.S. government have focused on specific usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, but not on current operations. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy,” the spokesperson said.
The tensions between the Pentagon and Anthropic recently surfaced during a military operation to capture Venezuelan President Nicolás Maduro. According to the administration official, an executive at Anthropic contacted an executive at Palantir, an AI software firm involved in the operation, to inquire whether Claude had been used in the raid. The official interpreted this inquiry as a potential expression of disapproval, given that the operation involved the use of force.
“It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot,” the official said.
However, Anthropic disputes this characterization. The spokesperson flatly denied that the company discussed the use of Claude for specific operations with the Department of War or any industry partners outside of routine technical discussions.
Beyond the Maduro incident, the official described a broader “culture clash” with Anthropic, characterizing the company as the most “ideological” of the AI labs regarding the potential risks of the technology. Despite this, the official acknowledged that replacing Claude would be difficult, as “the other model companies are just behind” in terms of specialized government applications.
Last summer, Anthropic signed a contract with the Pentagon valued at up to $200 million, and Claude was the first AI model granted access to the Pentagon’s classified networks. OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok are currently used in unclassified settings, and all three companies have agreed to remove, for their work with the Pentagon, the restrictions applied to general users.
The Pentagon is currently negotiating with these three companies to extend their access to classified networks, insisting on the “all lawful purposes” standard for both classified and unclassified applications. The official indicated that one of the three companies has already agreed to these terms, while the other two are demonstrating greater flexibility than Anthropic.
A source familiar with the company’s internal dynamics said Anthropic also faces debate among its engineers over collaboration with the Pentagon. CEO Dario Amodei has publicly expressed concerns about the potential dangers of unchecked AI development.
Despite the ongoing dispute, the Anthropic spokesperson reiterated the company’s commitment to the national security space. “That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.”
