Anthropic’s New AI Model Sparks Cybersecurity Alarm Over Hacking Risks
- Anthropic has restricted the release of its new artificial intelligence model, Mythos, citing unprecedented cybersecurity risks.
- According to reporting from Fortune on April 10, 2026, Anthropic is limiting access to a small group of major technology companies.
- The New York Times reported on April 7, 2026, that Anthropic describes the model as a cybersecurity reckoning.
Anthropic has restricted the release of its new artificial intelligence model, Mythos, citing unprecedented cybersecurity risks. The company claims the model is too powerful to be made available to the general public because of its advanced ability to identify and exploit software vulnerabilities.
According to reporting from Fortune on April 10, 2026, Anthropic is limiting access to a small group of major technology companies. These firms provide the foundational software for many other digital services, and Anthropic intends to give these defenders time to harden their systems before such capabilities are more widely available.
The New York Times reported on April 7, 2026, that Anthropic describes the model as a cybersecurity reckoning. The model can quickly identify and exploit vulnerabilities, including minor flaws that would be infeasible for humans to exploit.
Risks to Critical Infrastructure
The potential impact of these capabilities has led to alarm regarding critical infrastructure. Reports from De Standaard indicate concerns that the model could potentially be used to hack banks and power plants, while Trends DataNews reported urgent meetings between top executives at American banks regarding the new AI model.

Anthropic is not the only AI developer adopting a cautious release strategy for cybersecurity-capable models. OpenAI is reportedly developing a model internally known as Spud, which may match the capabilities of Mythos. According to a report from Axios, OpenAI also plans a phased rollout of an advanced cybersecurity-focused system to a small group of partners to provide defenders with a head start.
While the companies frame these decisions as safety measures, some analysts have characterized the limited releases as marketing hype designed to build anticipation for new models rather than purely safety-driven decisions.
The State of AI-Driven Cyberattacks
Regardless of the release strategy for Mythos, cybersecurity experts warn that AI-driven cyber capabilities have reached a dangerous tipping point. Fortune reports that existing, publicly available AI models can already perform sophisticated cyberattacks, sometimes within minutes.
Researchers are particularly concerned about the increased scale and accessibility of these attacks. AI systems are increasingly automating tasks that previously required high levels of technical expertise, such as:
- Scanning code for vulnerabilities.
- Running attacks that require chaining multiple exploits together.
This automation allows attackers who lack advanced technical skills to launch highly coordinated assaults across thousands of systems simultaneously.
