Anthropic v Pentagon: Judge Rules Against Trump Admin in AI Contract Dispute
- A federal judge in San Francisco has blocked the Trump administration from labeling artificial intelligence company Anthropic as a supply chain risk, delivering a significant preliminary injunction in a high-profile legal battle over AI governance and government contracting.
- The decision halts an order by President Trump directing every federal agency to immediately cease all use of Anthropic's technology.
- Judge Lin's order bars the Trump administration from implementing, applying, or enforcing the president's directive.
A federal judge in San Francisco has blocked the Trump administration from labeling artificial intelligence company Anthropic as a supply chain risk, delivering a significant preliminary injunction in a high-profile legal battle over AI governance and government contracting. U.S. District Judge Rita Lin issued the ruling on March 26, 2026, preventing the Pentagon from enforcing a designation that would have cut off the firm from federal work.
The decision halts an order by President Trump directing every federal agency to immediately cease all use of Anthropic’s technology. Judge Lin’s 43-page opinion characterized the administration’s moves as potentially unlawful punitive measures that could cripple the company. The ruling marks an early win for Anthropic in a dispute that has drawn attention from across the technology sector and government policy circles.
Court Finds First Amendment Retaliation
Judge Lin’s order bars the Trump administration from implementing, applying, or enforcing the president’s directive. It also blocks the Pentagon’s effort to designate Anthropic as a threat to U.S. national security. In the ruling, Lin wrote that the administration’s actions appeared to be retaliation for protected speech.

Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.
Judge Rita Lin
The judge noted that the government set out to publicly punish Anthropic for its ideology and rhetoric, as well as for what officials cast as its arrogance in refusing to compromise on certain beliefs. Lin described the administration’s moves as Orwellian, stating they could cripple the company. She concluded that Anthropic had shown the broad punitive measures were likely unlawful and that the firm is suffering irreparable harm from them.
Lin stayed her order for seven days, giving the government an opportunity to appeal. A final ruling in the case could still be months away. The injunction does not stop the Trump administration from taking actions that were lawful beforehand, meaning the government remains free to choose a different AI provider instead of Anthropic.
Dispute Origins and Social Media Escalation
The conflict intensified following public disagreements over AI safety guardrails. Anthropic had pushed to bar the military from using its Claude model for domestic surveillance or to power fully autonomous weapons. The Defense Department maintained it needed authority to use AI for all lawful purposes, noting that restrictions against those particular uses were already in place.
According to court documents, the government used Anthropic’s Claude for much of 2025 without complaint. Defense employees accessed the technology through Palantir under terms that Anthropic co-founder Jared Kaplan said prohibited mass surveillance of Americans and lethal autonomous warfare. Disagreements began in earnest only when the government aimed to contract with Anthropic directly.
The situation escalated through social media posts from administration officials. President Trump’s post on Truth Social on February 27, 2026, referenced “Leftwing nutjobs” at Anthropic and directed every federal agency to stop using the company’s AI. Defense Secretary Pete Hegseth soon echoed this, stating he would direct the Pentagon to label Anthropic a supply chain risk.
Legal Process and Evidence Issues
Judge Lin’s opinion suggests the dispute did not need to escalate as far as it did, because the government disregarded the established process for resolving such disputes. The court found that Hegseth did not complete the specific set of actions necessary to designate a company as a supply chain risk. Letters sent to congressional committees claimed less drastic steps had been evaluated and deemed not possible, without providing further details.
The government also argued the designation was necessary because Anthropic could implement a “kill switch” in its technology. However, the judge wrote that the government’s lawyers later had to admit they had no evidence of that capability. Hegseth’s post stated that no contractor, supplier, or partner doing business with the United States military could conduct commercial activity with Anthropic.
The government’s own lawyers admitted on Tuesday that the Secretary doesn’t have the power to do that, and agreed with the judge that the statement had absolutely no legal effect at all.
Court Documents
The aggressive posts led the judge to conclude that Anthropic was on solid ground in complaining that its First Amendment rights had been violated. The administration’s pattern of posting first and consulting lawyers later drew specific criticism from the bench. During a hearing on March 24, 2026, Lin pressed the government’s lawyers about why Anthropic was blacklisted, then used even sharper language in the written order issued two days later.
Anthropic sued the administration to try to reverse the Defense Department’s decision to blacklist the company after contract talks fell apart. The company sought the injunction to pause those actions and prevent further monetary and reputational harm as the case unfolds. In a statement after the ruling, a spokesperson for Anthropic said they were grateful to the court for moving swiftly and pleased that the court agreed Anthropic is likely to succeed on the merits.
