Pentagon Threatens Anthropic Over AI Use for Surveillance & Drones

The Pentagon is pressuring artificial intelligence company Anthropic to grant broader access to its AI technology, threatening to blacklist the firm or invoke the Defense Production Act if it refuses. The dispute centers on the military’s desire to use Anthropic’s tools for both autonomous drone attacks and large-scale surveillance, raising significant safety and ethical concerns.

Defense Secretary Pete Hegseth issued the ultimatum during a meeting with Anthropic CEO Dario Amodei on Tuesday, demanding the company lift its safety restrictions by Friday at 5:01 pm, according to multiple reports. Failure to comply could result in Anthropic being declared a “supply chain risk,” effectively cutting it off from lucrative Pentagon contracts, or, paradoxically, being compelled to cooperate through the Defense Production Act.

Anthropic representatives reportedly voiced concerns during the meeting regarding the reliability of AI in controlling weapons systems and the potential risks associated with unregulated AI use in mass surveillance. Amodei has consistently expressed these anxieties publicly, warning about the erosion of constitutional protections with fully autonomous weapons.

“I am worried about the autonomous drone swarm, right?” Amodei said in a recent interview with podcaster Wes Roth. “The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don’t really have those protections.”

Amodei also cautioned that AI could be used to analyze private conversations captured through in-home technology, potentially leading to political labeling and violations of the Fourth Amendment.

In a surprising move following the meeting with Hegseth, Anthropic announced it was dropping a central safety policy designed to mitigate societal risks posed by its AI development. While the company has not explicitly linked this decision to the Pentagon’s demands, the timing has fueled speculation.

Legal experts have questioned the administration’s ability to force Anthropic’s compliance using the Defense Production Act. The company is currently in negotiations for a contract with the Pentagon and has previously offered to allow its AI systems to be used for missile and cyber defense. However, the Pentagon is now seeking unrestricted access to Anthropic’s tools for all military purposes.

Concerns about the potential misuse of Anthropic’s AI technology were amplified by reports that the Pentagon used the company’s Claude model during an operation in Caracas, Venezuela, that resulted in 83 deaths, including civilians. The operation involved a partnership between Anthropic and Palantir, a U.S. government contractor.

A Pentagon official stated that Hegseth’s demands are “not about mass surveillance and autonomous weapons,” but critics point to the administration’s history of overstepping legal boundaries to expand surveillance and increase military force. Hegseth himself has signaled a shift away from “equitable AI” and “DEI infusions” that he believes hinder the military’s ability to effectively utilize the technology.

“We will not employ AI models that won’t allow you to fight wars,” Hegseth declared during an address at SpaceX headquarters in January.

The potential dangers of deploying AI in warfare were highlighted by a recent study that found one AI model recommended a nuclear strike in 95 percent of simulated war game scenarios. This underscores the risks associated with relinquishing human control over critical military decisions.
