Vulnerability Found in Google’s Antigravity IDE Could Allow Remote Code Execution
Cybersecurity researchers have discovered a vulnerability in Google’s agentic integrated development environment (IDE), Antigravity, that could be exploited to achieve code execution. The flaw, since patched, combines Antigravity’s permitted file-creation capabilities with insufficient input sanitization in Antigravity’s native file-searching tool, find_by_name, to bypass the program’s Strict Mode, a restrictive security configuration that limits network access, prevents out-of-workspace writes, and ensures all commands run within a sandbox context.
“By injecting the -X (exec-batch) flag through the Pattern parameter [in the find_by_name tool], an attacker can force fd to execute arbitrary binaries against workspace files,” Pillar Security researcher Dan Lisichkin said in an analysis. “Combined with Antigravity’s ability to create files as a permitted action, this enables a full attack chain: stage a malicious script, then trigger it through a seemingly legitimate search, all without additional user interaction once the prompt injection lands.”
The attack takes advantage of the fact that the find_by_name tool call is executed before any of the constraints associated with Strict Mode are enforced and is instead interpreted as a native tool invocation, leading to arbitrary code execution. The Pattern parameter is designed to accept a filename search pattern, which find_by_name passes to fd to run a file and directory search, but it lacks strict validation and forwards the input directly to the underlying fd command.
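The unsanitized pass-through can be illustrated with a hypothetical sketch of how such a tool might build fd’s argument vector (the function names below are illustrative, not Antigravity’s actual code). A pattern beginning with “-” lands in a position where fd parses it as one of its own options, while a “--” end-of-options separator would force fd to read it as a positional search pattern:

```python
def build_find_argv_unsafe(pattern: str, root: str = ".") -> list[str]:
    # Vulnerable sketch: the raw pattern occupies a slot where fd will
    # parse a leading "-" as one of its own command-line flags.
    return ["fd", pattern, root]

def build_find_argv_safe(pattern: str, root: str = ".") -> list[str]:
    # Hardened sketch: "--" marks the end of options, so everything after
    # it is treated as a positional search pattern, never as a flag.
    return ["fd", "--", pattern, root]

injected = "-X"  # attacker-controlled value delivered via prompt injection
print(build_find_argv_unsafe(injected))  # ['fd', '-X', '.']: fd sees a flag
print(build_find_argv_safe(injected))    # ['fd', '--', '-X', '.']: fd sees a pattern
```

The “--” convention is standard across POSIX-style command-line tools, which is why option injection of this kind is usually closed by inserting the separator rather than by pattern blocklists alone.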
An attacker could leverage this behavior to stage a malicious file and inject malicious commands into the Pattern parameter to trigger the execution of the payload. “The critical flag here is -X (exec-batch)….”
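The role of -X in the attack chain can be shown with a toy model of fd’s argument parsing (illustrative only, not fd’s real implementation): -X/--exec-batch consumes the next token as a command to run against the search results, so a staged script that matches the “search pattern” ends up being handed to that command.

```python
def parse_fd_args(argv):
    """Toy model of fd's argument parsing (not fd's real code):
    -X / --exec-batch consumes the next token as a command to run
    once against all search results."""
    exec_batch = None
    positionals = []
    args = iter(argv)
    for arg in args:
        if arg in ("-X", "--exec-batch"):
            exec_batch = next(args)
        elif arg.startswith("-"):
            continue  # other flags are ignored in this toy model
        else:
            positionals.append(arg)
    return exec_batch, positionals

# The injected "pattern" smuggles a flag plus a command into argv:
cmd, patterns = parse_fd_args(["-X", "sh", "payload.sh"])
print(cmd, patterns)  # sh ['payload.sh']: the staged script is fed to sh
```

Under this model, a benign-looking file search becomes an execution primitive: fd locates the previously staged payload and runs it via the injected command.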
Google’s highest security setting for its agents runs command operations through a sandbox and throttles network access, …
“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.” The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests….
A prompt injection flaw in Google’s Antigravity IDE turns a file search tool into a remote code execution vector, bypassing Secure Mode protections.

Security researchers have revealed a prompt injection flaw in Google’s Antigravity IDE that could be weaponized to bypass its sandbox protections and achieve remote code execution (RCE). The issue stems from Antigravity’s ability to let AI agents invoke native functions, such as searching files, on behalf of the user. Designed to reduce complexity, the feature could allow attackers to inject malicious input into a tool parameter. According to Pillar Security researchers, the vulnerability could bypass Antigravity’s “most restrictive security configuration,” Secure Mode.

The flaw was reported to Google in January; the company acknowledged and fixed the issue internally, awarding Pillar Security a bounty through its Vulnerability Reward Program (VRP) for AI-specific categories. Google did not immediately respond to CSO’s request for comment.

File search could be turned into code execution

Pillar’s prompt injection vector relied on Antigravity’s find_by_name tool and the fd utility it invokes. find_by_name is one of Antigravity’s built-in agent tools that lets the AI search for files and directories in the project workspace using the fd command line. Any string beginning with “-” was interpreted by fd as a flag rather than a search pattern, allowing a pattern such as “-X sh” to execute binaries against matching files. “The technique exploits insufficient input sanitization of the find_by_name tool’s Pattern parameter, allowing attackers to inject command-line flags into the underlying fd utility, converting a file search operation into arbitrary code execution,” the researchers said in a blog post….
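Since the root cause is that flag-like strings reach fd’s command line at all, one way to close this class of bug, sketched here as an assumption rather than Google’s actual (unpublished) fix, is to validate the pattern before it is ever placed into argv:

```python
def validate_search_pattern(pattern: str) -> str:
    # Defensive sketch (an assumption, not Google's published fix):
    # refuse anything fd could parse as an option, so a prompt-injected
    # "-X ..." value never reaches the command line as a flag.
    if not pattern or pattern.startswith("-"):
        raise ValueError(f"refusing flag-like search pattern: {pattern!r}")
    return pattern

print(validate_search_pattern("*.md"))  # *.md: ordinary patterns pass through
```

In practice this check would be combined with the “--” end-of-options separator for defense in depth: the validator rejects obviously flag-like input, and the separator guarantees that anything slipping past it is still parsed as a pattern, not an option.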
