News Directory 3

China AI: US Companies Allege Illegal Model Copying

February 25, 2026 | Victoria Sterling, Business Editor | Business

The escalating competition in artificial intelligence has taken a new turn, with Anthropic, a leading US AI firm, accusing three Chinese companies – DeepSeek, MiniMax and Moonshot AI – of systematically extracting capabilities from its Claude chatbot through what it describes as a large-scale intellectual property theft operation. The allegations, made public on February 24, 2026, center on the creation of over 24,000 fraudulent accounts used to generate more than 16 million interactions with Claude, a process known as “distillation.”

Distillation is a common practice within the AI industry: a more powerful “teacher” model – in this case, Claude – is queried to produce outputs that are then used to train a smaller “student” model, allowing companies to create cheaper, more accessible versions of advanced AI. However, Anthropic explicitly prohibits distilling its Claude models, and alleges that these Chinese firms bypassed both that restriction and its regional access limitations.
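The mechanics described above can be illustrated with a deliberately minimal sketch. Real distillation trains a neural network on a frontier model's responses; here, purely for illustration, the "teacher" is a simple fixed function and the "student" is fit to its outputs by least squares. All names and numbers below are hypothetical, not drawn from any actual system:

```python
import random

# Illustrative sketch of distillation (all names here are hypothetical):
# a student model is trained to reproduce a teacher model's outputs,
# using only queries to the teacher, with no access to its internals.

def teacher(x):
    # Stand-in for a large, capable model: here just a fixed linear function.
    return 3.0 * x + 1.0

# 1. Query the teacher at many inputs (the "interactions").
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
ys = [teacher(x) for x in xs]

# 2. Fit a student of the same form (y = w*x + b) by ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

# With enough queries, the student recovers the teacher's behavior.
print(round(w, 4), round(b, 4))
```

The point of the toy example is that the student needs only input-output pairs, which is why large volumes of automated queries, like the millions of interactions Anthropic alleges, are the raw material of a distillation effort.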

The accusations come amid growing concerns in the US about the rapid advancements of Chinese AI capabilities and the potential for these technologies to be used for national security purposes. Anthropic argues that allowing foreign labs, particularly those linked to the Chinese Communist Party, to circumvent export controls through distillation attacks undermines the competitive advantage the US aims to maintain in the AI sector. The company claims this practice enables authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.

According to Anthropic, Shanghai-based MiniMax was the most prolific actor in this alleged campaign, accounting for over 13 million of the total interactions with Claude. Moonshot AI generated over 3.4 million exchanges, while DeepSeek accounted for approximately 150,000. Anthropic identified the activity through analysis of IP addresses, request metadata, and infrastructure indicators that deviated from typical user traffic. The company states the focus of these interactions was on Claude’s most advanced features, including complex reasoning, coding, and tool use.

This isn’t the first time concerns have been raised about DeepSeek’s rapid progress. Last year, the company launched a powerful AI model that rivaled industry leaders like OpenAI’s ChatGPT, despite requiring significantly fewer computing resources. This raised questions about the effectiveness of US export controls and prompted OpenAI to investigate whether DeepSeek had improperly distilled its ChatGPT models. Earlier this month, OpenAI sent a memo to the US House Select Committee on China outlining its concerns about DeepSeek’s advancements, alleging ongoing efforts to “free-ride” on the capabilities developed by US AI labs.

The core of the dispute lies in the definition of legitimate distillation versus a “distillation attack.” While AI companies routinely distill their own models to create smaller, more affordable versions for customers, Anthropic contends that the scale and method employed by DeepSeek, MiniMax, and Moonshot AI constitute a deliberate attempt to illicitly acquire its intellectual property. The company has not, however, indicated any intention to pursue legal action at this time, framing the issue as a violation of its terms of service and regional access restrictions.

The implications of these allegations extend beyond the immediate financial impact on Anthropic. The US military reportedly utilized Anthropic’s Claude AI tool in an operation that led to the capture of Venezuelan leader Nicolás Maduro, highlighting the strategic importance of maintaining a technological edge in AI. The concern is that models built through large-scale distillation may lack the safety guardrails embedded in US-developed systems, potentially leading to unintended consequences when deployed in sensitive applications.

The accusations against the Chinese AI firms also underscore the challenges of enforcing intellectual property rights in the rapidly evolving AI landscape. As AI models become increasingly complex and accessible, the risk of unauthorized replication and exploitation is likely to grow. This situation is likely to intensify pressure on policymakers to develop more effective mechanisms for protecting US AI innovation and preventing the transfer of sensitive technologies to potential adversaries.

The rapid advancement of Chinese AI capabilities, even if achieved through methods considered questionable by US firms, is reshaping the global AI landscape. DeepSeek is expected to release a new flagship model imminently, further demonstrating the increasing competitiveness of Chinese AI labs. The situation highlights the need for continued investment in AI research and development, as well as a proactive approach to safeguarding US technological leadership in this critical domain.

Anthropic’s claims are being closely watched by industry observers and policymakers alike, as they raise fundamental questions about the future of AI competition and the balance between innovation and security. The company’s ability to detect and prevent these alleged distillation attacks will be crucial in maintaining its competitive advantage and protecting its intellectual property.

© 2026 News Directory 3. All rights reserved.