Threat actors are increasingly leveraging artificial intelligence (AI) to accelerate and refine their attacks, according to a new report from Google’s Threat Intelligence Group (GTIG). The report, an update to findings from November 2025, details a surge in AI integration across the attack lifecycle, from initial reconnaissance to malware development and social engineering.
Model Extraction as a Growing Threat
A key focus of the GTIG report is the rise of “model extraction” attacks, also described as “distillation attacks.” These attacks are a form of intellectual property theft in which malicious actors attempt to replicate the functionality of proprietary AI models. Google has detected and mitigated numerous such attempts, originating from private-sector entities and researchers globally, all in violation of Google’s terms of service. While the report notes no direct attacks by state-sponsored actors on Google’s most advanced “frontier models” or its generative AI products, the frequency of these extraction attempts is a significant concern. Google has taken steps to disrupt the activity, including disabling associated accounts.
This type of attack isn’t limited to Google. The report suggests that businesses with proprietary AI models are likely to face similar threats in the near future. The motivation behind these extractions is to clone proprietary logic, essentially creating a competing model without the substantial investment in research and development.
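The mechanics behind such attacks are well documented in the machine-learning literature on knowledge distillation: the attacker repeatedly queries the target model and trains a local “student” on its responses. The sketch below illustrates the general idea only; it uses a toy network as a stand-in for a commercial inference API and is not drawn from the GTIG report.

```python
# Minimal sketch of the technique behind "distillation attacks":
# the attacker can only query the victim model, but by training a
# local "student" network on the victim's outputs, they gradually
# clone its decision surface. The victim here is a toy random
# network; in a real attack it would be a paid inference API.
import torch
import torch.nn as nn
import torch.nn.functional as F

# The proprietary model, visible to the attacker only as a black box.
victim = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
victim.eval()

def query_victim(batch: torch.Tensor) -> torch.Tensor:
    """Hypothetical black-box API returning class probabilities."""
    with torch.no_grad():
        return F.softmax(victim(batch), dim=-1)

# The attacker's local copy, trained purely on harvested outputs.
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1_000):
    probes = torch.randn(64, 32)        # attacker-chosen inputs
    soft_labels = query_victim(probes)  # harvested victim outputs
    # KL divergence against the victim's soft labels pulls the
    # student toward the victim's behavior, with no access to the
    # victim's weights or training data.
    loss = F.kl_div(F.log_softmax(student(probes), dim=-1),
                    soft_labels, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```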
AI-Powered Phishing and Reconnaissance
Beyond model extraction, GTIG observed a broader trend of AI adoption by threat actors. Large language models (LLMs) are becoming essential tools for technical research, enabling faster and more effective targeting and, crucially, the generation of highly convincing phishing lures. The report highlights the ability of LLMs to create nuanced, personalized phishing messages that are significantly harder to detect than traditional, often clumsily worded, attempts.
The use of AI extends to reconnaissance activities as well. Threat actors are using AI to gather information more efficiently, identifying potential targets and vulnerabilities with greater speed and accuracy. This accelerated reconnaissance phase allows attackers to move more quickly through the attack chain, reducing the window of opportunity for defenders to respond.
Malware Development Accelerated by AI
The report also indicates that AI is being used to accelerate malware development. While the specifics of this application weren’t detailed, the implication is that AI tools are assisting in tasks such as code generation, vulnerability exploitation, and obfuscation – making malware more potent and harder to analyze. This could involve AI assisting in identifying and exploiting zero-day vulnerabilities, or automating the process of creating polymorphic malware that evades detection by signature-based antivirus systems.
Broader Cybersecurity Landscape
The GTIG report aligns with broader observations of the evolving cybersecurity landscape. Gen Digital’s Q4 2025 Threat Report likewise highlights the increasing role of AI in facilitating scams, particularly those targeting consumers. Gen Digital reported blocking 1.43 billion attacks in Q4, with a 17.6% quarter-over-quarter increase in its global risk ratio. Its report emphasizes that attacks increasingly focus on tricking users into taking actions that compromise their own security, such as clicking malicious links or downloading fraudulent software.
Cloudflare’s 2025 Year in Review also points to the growing influence of AI in the threat landscape, alongside the emergence of post-quantum cryptography and record-breaking distributed denial-of-service (DDoS) attacks. These trends collectively paint a picture of a rapidly evolving threat environment where AI is both a powerful defensive tool and a potent weapon in the hands of attackers.
Implications for Defenders
The GTIG report underscores the need for proactive defense strategies. The ability to anticipate and thwart AI-enabled threats requires continuous investment in threat intelligence, security controls, and model robustness. Google states it has strengthened both its security controls and its Gemini models against misuse. Defenders must focus on detecting and mitigating model extraction attempts, improving phishing detection capabilities, and enhancing malware analysis techniques. The report serves as a call to action for the cybersecurity community to adapt to the changing threat landscape and develop innovative solutions to counter the growing use of AI by malicious actors.
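The report does not describe how Google detects these attempts. As one hedged illustration of what extraction monitoring could look like, the sketch below flags API accounts whose traffic combines very high query volume with almost no prompt reuse, a pattern more typical of automated harvesting than of ordinary use. The record fields and thresholds are hypothetical.

```python
# Illustrative heuristic for flagging extraction-style API traffic.
# The GTIG report does not describe Google's actual detection logic;
# the record fields and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    daily_queries: int     # inference calls over the last 24 hours
    distinct_prompts: int  # unique inputs observed in that window

def looks_like_extraction(s: AccountStats,
                          volume_threshold: int = 50_000,
                          diversity_threshold: float = 0.95) -> bool:
    """Systematic probing tends to combine very high query volume
    with near-zero prompt reuse; ordinary users repeat themselves."""
    if s.daily_queries < volume_threshold:
        return False
    diversity = s.distinct_prompts / s.daily_queries
    return diversity > diversity_threshold

feed = [
    AccountStats("acct-001", 120_000, 119_500),  # harvesting pattern
    AccountStats("acct-002", 3_200, 410),        # normal usage
]
print([s.account_id for s in feed if looks_like_extraction(s)])
# ['acct-001']
```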
The increasing sophistication of attacks, fueled by AI, demands a shift from reactive security measures to a more proactive and intelligence-driven approach. Organizations need to invest in tools and expertise that can identify and respond to threats before they cause significant damage. This includes leveraging AI-powered security solutions to automate threat detection and response, as well as fostering collaboration and information sharing within the cybersecurity community.
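Neither report names specific tooling, but a minimal example of AI-assisted detection is a text classifier that triages inbound messages before they reach users. The sketch below uses scikit-learn with a fabricated four-message corpus purely to keep it runnable; a production system would train on large curated datasets with far richer features.

```python
# Toy example of "AI-powered" phishing triage: a text classifier
# scores inbound messages. The corpus and labels are fabricated
# solely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account has been locked, verify your password here immediately",
    "Invoice attached, urgent wire transfer required today",
    "Lunch at noon tomorrow? Let me know",
    "Here are the meeting notes from this morning's standup",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

# Score an incoming message before it reaches the user's inbox.
incoming = ["Immediately verify your password to unlock your account"]
print(clf.predict_proba(incoming)[0][1])  # estimated phishing probability
```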
