News Directory 3
Anthropic Rolls Back AI Safety Policy Amid Pentagon Pressure

February 25, 2026 | Lisa Park, Tech Editor | Tech

Anthropic, the AI company that built its reputation on prioritizing safety, is revising its core safety policy amid a contentious standoff with the Pentagon over access to its AI models. The company announced on Tuesday that it is moving away from its strict “Responsible Scaling Policy” (RSP), which previously committed it to halting the training of more powerful models if their capabilities outstripped the company’s ability to ensure their safety. Instead, Anthropic is adopting a more flexible, nonbinding framework.

The original RSP, introduced in 2023, was a key differentiator for Anthropic, positioning it as a responsible actor in a rapidly evolving AI landscape. The policy stated, according to a 2024 version of the document, that the company “will not train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels.” Now, Anthropic argues that this rigid stance could hinder its ability to compete with rivals who are pursuing AI development with fewer self-imposed constraints.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Jared Kaplan, Anthropic’s chief science officer, told TIME. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” The company now intends to operate with “nonbinding but publicly-declared” safety goals, emphasizing transparency but removing the firm internal standards that previously governed its development process.

This shift in policy is occurring simultaneously with a high-stakes negotiation with the Pentagon. Defense Secretary Pete Hegseth issued an ultimatum to Anthropic on Tuesday, demanding the company roll back its AI safeguards or risk losing a $200 million government contract and potentially being placed on a government blacklist. The Pentagon is seeking broader access to Anthropic’s models for military applications.

Anthropic maintains two primary red lines regarding the military use of its technology: preventing its models from being used for mass domestic surveillance and avoiding the development of fully autonomous weapons systems. However, the Pentagon appears unwilling to fully accept these limitations. Reportedly, the Pentagon inquired in December whether Anthropic would permit its model to autonomously launch missiles to intercept incoming threats. According to Semafor, Anthropic responded by requesting further consultation before proceeding with such an application, but indicated a willingness to consider a carveout for missile defense systems.

While Anthropic has stated that the change to its RSP is unrelated to the Pentagon negotiations, the timing raises questions about the company’s motivations. The company told the Wall Street Journal that the policy update was driven by the realization that its initial safeguards were not being widely adopted by the industry, rendering them less effective. Anthropic argues that its original policy was intended to foster industry-wide consensus on AI safety, but that other companies have not followed suit.

The revised policy acknowledges that the original RSP may have been overly restrictive, hindering Anthropic’s ability to compete. The company now believes that a more flexible approach, coupled with increased transparency, will allow it to continue developing advanced AI models while still maintaining a commitment to safety. However, critics may argue that removing firm internal standards weakens Anthropic’s commitment to responsible AI development, particularly given the pressures it faces from the Pentagon.

The situation highlights the growing tension between the desire for innovation in AI and the need for robust safety measures. Anthropic’s decision reflects the complex challenges facing AI companies as they navigate a rapidly evolving technological landscape and increasing scrutiny from governments and the public. The company’s willingness to compromise on its initial safety pledge raises concerns about the potential for prioritizing commercial interests over ethical considerations, especially in the context of military applications.

The outcome of the negotiations with the Pentagon remains uncertain. However, Anthropic’s revised policy suggests a willingness to accommodate some of the military’s demands, potentially paving the way for a deal that would grant the Pentagon greater access to its AI technology. The company did not respond to a request for comment prior to publication.
