News Directory 3

Google Gemini AI: Model Cloning Attempts & IP Theft Concerns

February 16, 2026 | Lisa Park | Tech
At a glance
  • Google reports a wave of attempts to clone its Gemini AI model via “model extraction,” also called “distillation.”
  • In one aggressive case, attackers prompted Gemini more than 100,000 times, using a variety of non-English languages.
  • Similar tactics have a history; Google itself has faced accusations of employing them.
Original source: arstechnica.com

Google is facing a wave of attempts to replicate its Gemini AI model through a technique known as “model extraction” or “distillation,” the company announced on Thursday. These efforts, largely driven by commercial entities and researchers, involve repeatedly prompting Gemini to generate outputs, which are then used to train a smaller, competing model.

In one particularly aggressive instance, Google detected an adversarial session where attackers prompted Gemini over 100,000 times, utilizing a variety of non-English languages. The intent, according to Google, was to amass enough data to create a cheaper, functionally similar AI.

This isn’t a new phenomenon. Google acknowledges a history of similar tactics and has itself faced accusations of employing them. In 2023, Google’s Bard team was reportedly accused of using outputs from OpenAI’s ChatGPT to train its own chatbot. Jacob Devlin, a senior AI researcher at Google and creator of the BERT language model, reportedly raised concerns about violating OpenAI’s terms of service before resigning to join OpenAI. Google denied the claims but reportedly stopped using the data in question.

How Model Distillation Works

The process of replicating a model’s capabilities through repeated prompting is commonly referred to as “distillation” within the AI industry. The core idea is to leverage a large, pre-trained model – in this case, Gemini – as a “teacher” to train a smaller, more efficient “student” model. Instead of undertaking the massive computational expense of training a large language model (LLM) from scratch, developers can effectively shortcut the process by learning from an existing one.

The appeal is clear: building an LLM like Gemini requires immense resources – billions of dollars and years of dedicated engineering effort. Distillation offers a potentially faster and more cost-effective route to achieving comparable functionality, albeit with likely limitations in performance and scope. The resulting “student” model won’t be a perfect clone, but it can approximate the “teacher’s” behavior for specific tasks.
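To make the teacher–student idea concrete, here is a minimal sketch of the classic distillation objective: the student is trained to match the teacher’s temperature-softened probability distribution, typically by minimizing the KL divergence between the two. All logits and the temperature value below are illustrative, and note a key caveat: extraction against a black-box API like Gemini’s sees only sampled text, not probability distributions, so real-world extraction trains on generated outputs rather than on logits as shown here.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits: a higher temperature spreads probability mass across
    # classes, exposing more of the teacher's relative preferences.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # Training the student to minimize this drives it to mimic the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a 3-class toy task.
teacher = [4.0, 1.5, 0.2]
student = [3.0, 2.0, 0.5]
print(distillation_loss(teacher, student))
```

The loss is zero only when the student reproduces the teacher’s distribution exactly; in practice this term is computed over many prompts (and often combined with a standard cross-entropy loss on ground-truth labels), which is why extraction attempts involve such large volumes of queries.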

A Growing Threat Landscape

Google’s report, part of its ongoing AI Threat Tracker, highlights a broader trend: threat actors are increasingly integrating AI across the attack lifecycle, including reconnaissance, social engineering, and malware development. This update builds on findings from November 2025 about the growing use of AI tools by malicious actors.

While Google has not yet observed direct attacks on its “frontier models” – its most advanced AI systems – or generative AI products from advanced persistent threat (APT) groups, the frequency of model extraction attempts is raising concerns. The company states it has been actively detecting, disrupting, and mitigating these attacks, which originate from entities around the globe.

The motivation behind these extraction attempts appears to be primarily commercial, with private companies and researchers seeking a competitive advantage. However, the report also notes that large language models are becoming essential tools for government-backed threat actors, aiding in technical research, targeting, and the creation of sophisticated phishing campaigns.

Intellectual Property and the Ethics of AI Training

Google frames model extraction as a form of intellectual property theft, citing violations of its terms of service. However, the company’s position is somewhat complicated by its own history of training its LLMs on vast datasets scraped from the internet, often without explicit permission from the content creators. This raises questions about the broader ethics of AI training data and the boundaries of intellectual property in the age of large language models.

The incident underscores the challenges of protecting proprietary AI models in an era where the underlying technology is becoming increasingly accessible. While Google is actively working to defend its intellectual property, the cat-and-mouse game between AI developers and those seeking to replicate their work is likely to continue. The company’s ongoing monitoring and mitigation efforts will be crucial in navigating this evolving threat landscape, and the incident serves as a reminder of the growing importance of AI security and intellectual property protection.

On February 12, 2026, Google’s Threat Intelligence Group (GTIG) published a report detailing the increase in model extraction attempts and the integration of AI into adversarial operations.

© 2026 News Directory 3. All rights reserved.