
China-Linked Hackers Use Google Gemini AI for Cyberattack Planning

by Lisa Park - Tech Editor

A Chinese government-backed hacking group known as APT31 has been leveraging Google’s AI chatbot, Gemini, to plan and analyze potential cyberattacks against US organizations, according to a report released by the Google Threat Intelligence Group (GTIG). While no successful intrusions have been confirmed, the activity highlights a growing trend of threat actors experimenting with artificial intelligence to enhance their offensive capabilities.

APT31, also known as Violet Typhoon, Zirconium, and Judgment Panda, has a history of targeting critical US infrastructure. The group was previously sanctioned by the US government, and seven of its members were criminally charged in March 2024 for intrusions into computer networks, email accounts, and cloud storage belonging to numerous high-value targets. Its recent use of Gemini represents a shift toward more automated and sophisticated attack planning.

According to Google, APT31 employed a structured approach, prompting Gemini to adopt the persona of a cybersecurity expert in order to automate vulnerability analysis and generate targeted testing plans. This isn’t simply asking an AI for advice; it is using the model to streamline the entire reconnaissance phase of an attack.
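
For illustration, the persona technique relies on an ordinary API feature rather than any exploit. The sketch below assumes the google-generativeai Python SDK; the persona text and prompt are invented for illustration and are not the attackers’ actual inputs.

```python
# Minimal sketch of persona-style prompting, assuming the
# google-generativeai Python SDK. Persona and prompt are illustrative,
# not the attackers' actual inputs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# A system instruction pins the model to a role for the whole session;
# this is the generic mechanism the report describes being abused.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You are a senior cybersecurity analyst. Review findings and "
        "produce a structured, prioritized testing plan."
    ),
)

response = model.generate_content(
    "Summarize the exposed services in this scan output and rank them by risk."
)
print(response.text)
```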

The group’s activities, which occurred late last year, involved using Gemini in conjunction with Hexstrike, an open-source red-teaming tool. Hexstrike, built on the Model Context Protocol (MCP), allows users to execute over 150 security tools spanning network and vulnerability scanning, reconnaissance, and penetration testing. While intended for ethical hackers and bug bounty hunters, Hexstrike has been adopted by malicious actors to accelerate their operations.
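
To give a sense of the plumbing, the sketch below shows a minimal MCP tool server, assuming the official mcp Python SDK. Hexstrike’s real servers wrap entire security suites; the single toy ping tool here is purely illustrative.

```python
# Minimal MCP tool server sketch, assuming the official `mcp` Python SDK.
# Hexstrike exposes 150+ real security tools this way; the lone "ping"
# tool below is a stand-in for illustration only.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def ping(host: str) -> str:
    """Send one ICMP echo request (Unix-style flags) and return the output."""
    result = subprocess.run(
        ["ping", "-c", "1", host], capture_output=True, text=True, timeout=10
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    # Serves over stdio so an MCP-aware client, such as an AI agent,
    # can discover and invoke the tool.
    mcp.run()
```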

Google’s report details how APT31 used Hexstrike and Gemini to automate intelligence gathering, identifying technological vulnerabilities and organizational weaknesses. The combination allowed the group to analyze exploits – including remote code execution, web application firewall (WAF) bypass techniques, and SQL injection – against specific US-based targets. Google has since disabled the accounts associated with this campaign.
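
For readers unfamiliar with the last of those, SQL injection is worth a brief illustration. The Python/sqlite3 sketch below is generic background rather than code from the campaign, showing the vulnerable pattern alongside its standard parameterized fix.

```python
# Generic SQL injection illustration in Python/sqlite3; background only,
# not code from the campaign Google describes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL string, so the OR clause
# rewrites the query and returns every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("vulnerable:", rows)

# Safe: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized:", rows)
```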

John Hultquist, chief analyst at Google Threat Intelligence Group, emphasized the significance of this development. “The adversaries’ adoption of this capability is so significant – it’s the next shoe to drop,” he stated. He outlined two primary concerns: the ability to operate across the entire attack lifecycle with minimal human intervention, and the automation of vulnerability exploitation.

Hultquist explained that this automation allows attackers to move faster than defenders and target a larger number of organizations. The speed advantage is particularly concerning given the often lengthy patch cycles within organizations – the time it takes to develop, test, and deploy security fixes. “It’s a really significant space currently,” Hultquist said. “In some organizations, it takes weeks to put defenses in place.”

This situation necessitates a shift in defensive strategy: security professionals will need to leverage AI themselves to respond to and remediate vulnerabilities more quickly. “We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed,” Hultquist noted.

Beyond the use of Gemini and Hexstrike, Google’s report also highlighted an increase in “distillation attacks,” also known as model extraction attempts. These attacks aim to steal the intellectual property embedded within AI models, specifically the underlying reasoning and chain-of-thought processes. Google’s GTIG and DeepMind teams have identified numerous attempts to extract information from Google’s AI products.
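
Conceptually, the harvesting step of such an attack is simple to sketch. In the hypothetical Python below, an attacker collects prompt-and-answer pairs, reasoning included, that could later be used to fine-tune a copycat “student” model; every name is invented and no real endpoint is implied.

```python
# Hypothetical sketch of the data-harvesting step of a distillation
# (model extraction) attack. All names are invented; the query function
# is a mock standing in for calls to a victim model's public API.
import json

def query_target_model(prompt: str) -> str:
    """Mock stand-in for the victim model's API (hypothetical)."""
    return f"[model answer, with step-by-step reasoning, for: {prompt}]"

prompts = [
    "Explain step by step how to balance a binary search tree.",
    "Walk through your reasoning for classifying this email as spam.",
]

# Capturing chain-of-thought style answers is what makes the stolen
# dataset valuable: it encodes the model's reasoning, not just labels.
dataset = [{"prompt": p, "completion": query_target_model(p)} for p in prompts]

with open("distilled.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
# A "student" model fine-tuned on files like this can approximate the
# original's behavior at a fraction of the original training cost.
```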

“This is coming from threat actors throughout the globe,” Hultquist said. “Your model is really valuable IP, and if you can distill the logic behind it, there’s very real potential that you can replicate that technology – which is not inexpensive.” Successful model extraction could allow competitors or malicious actors to accelerate their own AI development at a reduced cost.

Google’s latest report builds on its previous analysis from November 2025, which detailed the broader trend of government-backed groups and cybercriminals abusing Google’s AI tools. The company is actively implementing measures to mitigate these risks, but the evolving landscape demands continuous vigilance and adaptation. The increasing sophistication of AI-powered attacks underscores the need for a proactive, AI-driven approach to cybersecurity defense.
