
AI-Powered Cyberattacks Rise: North Korea, Iran, China & Russia Lead the Way

by Lisa Park - Tech Editor

Cybercriminals and state-sponsored hacking groups are increasingly leveraging artificial intelligence to enhance their attacks. Google has documented thousands of attempts to clone AI models, as well as new malware families that misuse AI APIs for malicious purposes. Groups originating from North Korea, Iran, China, and Russia are particularly active in this trend.

The Google Threat Intelligence Group (GTIG) reports a significant increase in the misuse of artificial intelligence by cybercriminals and state-backed hackers. In the fourth quarter, threat actors increasingly integrated AI tools into their attack chains to accelerate reconnaissance, social engineering, and malware development. Adversaries from North Korea, Iran, China, and Russia now systematically use large language models such as Gemini for technical research, target identification, and the creation of convincing phishing lures. Google states it has disabled the accounts and projects associated with these activities and has strengthened its defenses accordingly.

Cloned AI Models

A newer phenomenon is the “distillation attack,” in which adversaries attempt to clone proprietary AI models. GTIG and Google DeepMind identified and disrupted numerous attempts to systematically extract information from AI models via legitimate API access. In one documented campaign, Google recorded over 100,000 prompts aimed at replicating Gemini’s reasoning abilities across various languages. This form of intellectual property theft violates Google’s terms of service, and it primarily targets model developers and service providers rather than end users.
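To make the mechanics concrete, here is a schematic Python sketch of how such an extraction campaign works: replay a large prompt set against the hosted “teacher” model and record every response as training data for an unauthorized “student” model. The endpoint, key, and field names below are placeholders, not Google’s actual API.

import json
import requests  # generic HTTP client; the endpoint and key are placeholders

API_URL = "https://api.example.com/v1/generate"  # hypothetical hosted-model endpoint
API_KEY = "YOUR_KEY"                             # hypothetical credential

def query_teacher(prompt: str) -> str:
    """Send one prompt to the hosted 'teacher' model and return its reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

# An extraction campaign replays large prompt sets (GTIG counted over
# 100,000 prompts in one case) and records each reply as a training example.
prompts = ["Explain your reasoning step by step: ...", "Translate and justify: ..."]
with open("distill_dataset.jsonl", "w", encoding="utf-8") as out:
    for p in prompts:
        out.write(json.dumps({"prompt": p, "response": query_teacher(p)}) + "\n")
# The resulting corpus then serves as supervised fine-tuning data for a
# "student" model that imitates the teacher's behavior.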

Several Advanced Persistent Threat (APT) groups now use AI models across the entire attack lifecycle. The Iranian group APT42 employs Gemini to build detailed profiles of targets and generate culturally tailored phishing messages. The North Korean group UNC2970 uses AI-assisted research to gather information about cybersecurity and defense companies and to build credible recruiter personas. Notably, the Chinese group APT31 is experimenting with “agentic AI,” simulating penetration tests by giving Gemini a cybersecurity-expert persona and directing it to identify vulnerabilities in U.S. systems.

Fileless Malware via AI

In the realm of malware development, GTIG observed the new malware family HONESTCUE, which leverages Gemini’s API to generate C# code for second-stage malware. This approach enables fileless execution directly in memory, complicating detection by traditional security systems. COINBAIT, a phishing kit reportedly created with the Lovable AI platform and masquerading as a cryptocurrency exchange, was also documented. Cybercriminals are abusing the public sharing function of AI services to spread malware through “ClickFix” techniques, exploiting the trust associated with well-known AI platforms.

Google is responding with multi-layered protective measures: it systematically disables accounts and projects linked to threat actors, continuously improves its classifiers and models, and is deploying real-time defenses that can degrade the performance of a would-be “student model” during extraction attempts. The insights gained feed back into its security measures. While GTIG has not yet observed groundbreaking capabilities that fundamentally alter the threat landscape, the documented proof-of-concept developments show how AI-powered attacks could evolve.
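Google does not detail how these real-time defenses work. Purely as an illustration of the kind of server-side heuristic a provider might layer in, the following Python sketch flags accounts whose traffic looks extraction-like: very high prompt volume combined with a large share of near-duplicate, templated prompts. All thresholds and data shapes here are assumptions, not Google’s implementation.

from collections import defaultdict
from difflib import SequenceMatcher

VOLUME_THRESHOLD = 1_000  # prompts per account per day (assumed)
DUPLICATE_RATIO = 0.5     # share of near-duplicate prompt pairs (assumed)

def near_duplicate_ratio(prompts: list[str], sample: int = 200) -> float:
    """Rough fraction of consecutive prompt pairs that look templated."""
    pairs = list(zip(prompts, prompts[1:]))[:sample]
    if not pairs:
        return 0.0
    dup = sum(SequenceMatcher(None, a, b).ratio() > 0.9 for a, b in pairs)
    return dup / len(pairs)

def flag_extraction_suspects(logs: list[tuple[str, str]]) -> set[str]:
    """logs: (account_id, prompt) pairs from one day of API traffic."""
    by_account: dict[str, list[str]] = defaultdict(list)
    for account, prompt in logs:
        by_account[account].append(prompt)
    return {
        acct
        for acct, ps in by_account.items()
        if len(ps) > VOLUME_THRESHOLD and near_duplicate_ratio(ps) > DUPLICATE_RATIO
    }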

The increasing sophistication of these attacks highlights a critical shift in the cybersecurity landscape. The accessibility of powerful AI tools is lowering the barrier to entry for malicious actors, allowing them to automate and scale their operations with greater efficiency. The use of AI for reconnaissance and social engineering is particularly concerning, as it enables attackers to craft highly targeted and persuasive attacks that are more likely to succeed.

The development of fileless malware, facilitated by AI code generation, presents a significant challenge to traditional security solutions. Because fileless malware operates entirely in memory and leaves little or no trace on disk, signature-based scanners struggle to detect and remove it; countering it requires behavioral analysis and memory-aware threat detection.
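One classic memory-focused technique is to look for executable memory regions that no file on disk backs, the footprint in-memory payloads tend to leave. The Linux-only Python sketch below illustrates the idea; it is a rough heuristic rather than a detector, since JIT runtimes such as browsers and the .NET CLR legitimately create the same pattern.

import os

def anonymous_exec_regions(pid: int) -> list[str]:
    """Return executable memory mappings with no backing file for one process."""
    hits = []
    try:
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                fields = line.split()
                perms = fields[1]
                backing = fields[5] if len(fields) > 5 else ""
                # Executable but file-less: code that exists only in memory.
                if "x" in perms and not backing:
                    hits.append(line.strip())
    except (FileNotFoundError, PermissionError):
        pass  # process exited or is not readable by this user
    return hits

if __name__ == "__main__":
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            regions = anonymous_exec_regions(int(entry))
            if regions:
                print(f"PID {entry}: {len(regions)} anonymous executable region(s)")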

Google’s response, including the disabling of malicious accounts and the improvement of AI model defenses, is a crucial step in mitigating these threats. However, a comprehensive approach to AI-powered cybersecurity requires collaboration between technology companies, governments, and security researchers. Continuous monitoring, threat intelligence sharing, and the development of robust security measures are essential to stay ahead of evolving threats.

The situation underscores the double-edged nature of AI. While it offers immense potential for innovation and progress, it also introduces new risks that must be addressed proactively. As AI technology advances, the cybersecurity community must adapt and develop new strategies to defend against increasingly sophisticated attacks.
