Anthropic’s ‘Ethical AI’ Used in Military Operation Targeting Maduro
For months, Anthropic has positioned itself as Silicon Valley’s ethical alternative to artificial intelligence products such as ChatGPT and Google Gemini. The company, founded by the Amodei siblings, promotes Claude as an AI built on ethical principles and its own “Constitution.”
In other words, Anthropic has sold Claude as an AI model specifically designed to be harmless, honest, and, above all, peaceful.
Its terms of service are clear: no weapons, no mass surveillance, no violence.
But analysts now point to a possible contradiction with those terms: the Wall Street Journal reports that the Pentagon used Claude during the planning and execution of the military raid that captured Venezuelan leader Nicolás Maduro in early January.
The mission to detain Maduro and his wife, which included the bombing of several locations in Caracas last month, runs counter to Anthropic’s stated usage policy, which prohibits Claude from being used to “facilitate violence, develop weapons, or conduct surveillance.”
Here is how Claude came to process the intelligence behind a high-risk extraction operation, despite its ethical restrictions.
The Palantir Factor
How does an AI programmed to reject violent requests end up in a classified military planning operation at the Pentagon? The answer has a name: Palantir Technologies.
According to the report, the Department of Defense did not access Claude through the consumer web interface people use to summarize emails or write code. It did so through the data analysis platforms of Palantir Technologies, a firm specializing in defense analytics.
By integrating Claude within Palantir’s classified ecosystem, the Pentagon was able to use the model’s reasoning and data processing capabilities to synthesize terabytes of intelligence in real time.
Claude wasn’t “piloting drones” or “pulling the trigger”; it was doing what it does best: finding patterns that, in this case, led to a ground raid and prior bombings that reportedly caused casualties.
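To ground the idea of “reasoning and data processing” in something concrete, here is a minimal sketch of how any developer can ask Claude to find patterns across documents through Anthropic’s public Messages API. It is illustrative only: the file names, placeholder contents, and prompt are invented, the model identifier is simply a publicly documented one, and nothing here reflects Palantir’s classified integration, whose technical details the WSJ report does not disclose.

```python
# Purely illustrative sketch: cross-referencing documents with Claude via
# Anthropic's public Messages API. This is NOT the Palantir/Pentagon
# integration described in the article; that runs on classified
# infrastructure whose details are not public. File names, contents, and
# the prompt below are invented for this example.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Hypothetical inputs: plain-text documents to be cross-referenced.
documents = {
    "report_a.txt": "First document's text would go here...",
    "report_b.txt": "Second document's text would go here...",
}

# Concatenate the documents, labeling each with its file name.
corpus = "\n\n".join(f"[{name}]\n{text}" for name, text in documents.items())

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # a publicly documented model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "List recurring names, places, and timelines that appear "
                   "across the documents below.\n\n" + corpus,
    }],
)

# The reply arrives as a list of content blocks; the first is the text.
print(response.content[0].text)
```

The point of the sketch is simply that the same pattern-finding primitive behind everyday use cases is what becomes powerful, and controversial, at intelligence scale.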
Anthropic’s Defense and the Pentagon’s Pressure

The situation leaves Anthropic in a delicate bind.
On one hand, its valuation has soared to $380 billion, consolidating the company as an essential tool for U.S. national security. On the other, its founding mission of “safety first” clashes with the Trump administration’s military operation: according to the Journal, Anthropic’s concerns about how the Pentagon may use Claude have led administration officials to consider canceling its contract, worth up to $200 million.
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” a spokesperson for Anthropic stated. “Any use of Claude, whether in the private sector or across government, must comply with our Usage Policies, which govern its deployment.”
We will not employ AI models that do not allow us to win wars
The reality is that Anthropic is today the only provider of large language models (LLMs) available on classified networks through third parties.
The tension is evident. Secretary of Defense Pete Hegseth warned in January, in a message aimed at Silicon Valley’s ethical qualms: “We will not employ AI models that do not allow us to win wars.”
The implication for Big Tech seems clear: if companies want the government’s multibillion-dollar contracts, ethics cannot be an obstacle.
Neutrality and the Challenges of Military Use

What happened in Venezuela illustrates a broader reorientation of commercial AI toward military ends. Tools trained on the collective knowledge of the internet and refined to be “safe” are being redirected into military operations.
For the average user, Claude remains the friendly chatbot that helps with programming or essay writing. But beyond that, the WSJ report challenges the very concept of “neutral AI.”
The moment the model enters the Pentagon’s secure network, the “Constitution” of the AI is subordinated to the Constitution of the United States. And in that exchange, the ethics of Silicon Valley appears to be the first confirmed casualty.
