Google Gemini Used by State-Sponsored Hackers for Cyberattacks
State-sponsored hacking groups are increasingly leveraging Google’s Gemini AI model across the entire cyberattack lifecycle, from initial reconnaissance to malware development and disinformation campaigns. A new report from Google’s Threat Intelligence Group (GTIG) details how these actors are utilizing the AI platform to accelerate and refine their operations, blurring the lines between legitimate research and malicious activity.
The trend, observed with increasing frequency in the latter part of 2025, highlights how artificial intelligence is becoming a core component of modern hacking techniques. Google’s research indicates a productivity boost for threat actors in areas such as information gathering, social engineering, and malware creation. The report emphasizes that advanced AI models like Gemini can significantly increase the speed, scale, and sophistication of illicit activities.
While Google notes that no state-sponsored group has yet fully automated a large-scale cyberattack using Gemini, the platform is being integrated as a powerful tool within existing workflows. This contrasts with earlier instances, such as a Chinese government-backed campaign identified last year by Anthropic, which demonstrated more extensive automation. The current trend suggests human oversight remains crucial, particularly in the operational phases of attacks.
Specific Examples of Gemini’s Use in Cyberattacks
The GTIG report outlines several specific examples of how different threat actors are employing Gemini. A North Korean group, linked to the UNC2970 designation (which overlaps with Lazarus Group, Diamond Sleet, and Hidden Cobra), has been using Gemini to gather open-source intelligence (OSINT) on cybersecurity and defense companies. Specifically, they are synthesizing information about job roles and salary levels, likely to aid in targeted phishing campaigns and identify potential vulnerabilities within organizations.
Another North Korean group is reportedly consulting Gemini “multiple days a week” for technical support. This includes troubleshooting coding issues and generating new malware code when encountering obstacles during operations. This demonstrates Gemini’s utility as a dynamic and readily available resource for overcoming technical challenges.
An Iranian advanced persistent threat (APT) group has been utilizing Gemini to significantly enhance its reconnaissance techniques against targeted victims. Actors linked to China, Russia, and Iran are leveraging Gemini to create fabricated articles, personas, and other assets for disinformation operations. This highlights the AI’s potential to amplify and automate influence campaigns.
Google’s report emphasizes that in nearly all observed cases, Gemini is being used as one tool among many, rather than a complete replacement for existing methods. The platform is being strategically employed for specific tasks, such as automating routine processes, conducting research, and experimenting with malware.
The Blurring Lines of Reconnaissance
The GTIG report points to a concerning trend: the difficulty of distinguishing legitimate professional research from malicious reconnaissance when AI tools like Gemini are involved. The ability to quickly synthesize information and profile potential targets allows state-backed actors to craft highly tailored phishing attacks and identify individuals who may be more susceptible to compromise. This is particularly evident in the targeting of employees within the cybersecurity and defense industries.
The UNC2970 group, known for its “Operation Dream Job” campaign targeting aerospace, defense, and energy sectors, exemplifies this approach. They are consistently focused on defense-related targets and impersonate corporate recruiters to lure potential victims. Gemini assists in identifying specific technical job roles and associated salary information, enabling the creation of more convincing and targeted phishing lures.
Implications and Future Trends
Google’s findings underscore the growing sophistication of cyberattacks enabled by AI. While the technology doesn’t introduce entirely novel attack vectors, it significantly accelerates and amplifies existing capabilities. John Hultquist, chief analyst at Google’s Threat Intelligence Group, noted that many countries are still in an experimental phase, determining how best to integrate AI into their attack chains. “Nobody’s got everything completely worked out,” Hultquist said. “They’re all trying to figure this out, and that goes for attacks on AI, too.”
The report suggests that the use of AI in cyberattacks will likely continue to evolve as the technology matures and becomes more accessible. Defenders will need to adapt their strategies to counter these emerging threats, focusing on enhanced detection capabilities, improved threat intelligence, and proactive security measures. The increasing reliance on AI by both attackers and defenders is likely to lead to a continuous arms race in the cybersecurity landscape.
