AI-Powered Attack Grants Admin Access to AWS Account in Minutes
A cybercriminal leveraged generative AI to gain access to an Amazon Web Services (AWS) account and escalate privileges to administrator level in under ten minutes, according to threat research from Sysdig. The rapid intrusion involved code injection, exploitation of Amazon Bedrock, and attempts to launch costly GPU instances.
The compromised credentials belonged to an Identity and Access Management (IAM) user with read and write permissions on AWS Lambda and restricted access to Amazon Bedrock. According to Sysdig, the attacker discovered the exposed credentials in publicly accessible Amazon S3 buckets containing data for Retrieval-Augmented Generation (RAG) used with AI models. The organization may have created this account to automate Bedrock tasks via Lambda functions.
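Credentials leaked into RAG source data can often be caught before an attacker finds them by scanning documents for key patterns at ingestion time. The sketch below is a minimal, hypothetical illustration of such a scan (the document content and function names are not from the Sysdig report); it matches the standard 20-character AWS access key ID format:

```python
import re

# AWS access key IDs are 20 characters: a 4-letter prefix (AKIA for
# long-term IAM user keys, ASIA for temporary STS keys) followed by
# 16 uppercase alphanumerics.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_exposed_key_ids(text: str) -> list[str]:
    """Return any AWS access key IDs embedded in a document."""
    return AWS_KEY_ID.findall(text)

# Hypothetical RAG document contaminated with a credential
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID).
doc = "ingest notes: use key AKIAIOSFODNN7EXAMPLE for the Lambda job"
print(find_exposed_key_ids(doc))  # → ['AKIAIOSFODNN7EXAMPLE']
```

A scan like this only catches literal key material; it would not flag credentials stored in other encodings, but it is cheap enough to run on every object before it lands in a public bucket.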
Code Injection Enables Access to Administrator Privileges
With the IAM user possessing “ReadOnlyAccess” policy permissions, the attacker began enumerating extensive AWS resources, including Secrets Manager, SSM, S3, Lambda, EC2, ECS, RDS, CloudWatch, KMS, and Organizations. They also investigated AI services like Bedrock, OpenSearch Serverless, and SageMaker, listing models, knowledge bases, and inference profiles. Sysdig notes that such broad enumeration by a single IAM user should be treated as suspicious and monitored more closely.
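Enumeration this broad leaves a distinctive CloudTrail footprint: one identity issuing List/Describe/Get calls across many unrelated services in a short window. The following is a minimal sketch of such a heuristic over simplified CloudTrail-style records; the threshold, field names, and example ARN are illustrative assumptions, not Sysdig's actual detection logic:

```python
from collections import defaultdict

READ_PREFIXES = ("List", "Describe", "Get")

def flag_enumerators(events, service_threshold=5):
    """Flag identities whose read-only API calls span many services.

    `events` are simplified CloudTrail records: dicts carrying the
    calling identity ARN, the event source (service), and the API name.
    """
    services_touched = defaultdict(set)
    for e in events:
        if e["eventName"].startswith(READ_PREFIXES):
            services_touched[e["userIdentityArn"]].add(e["eventSource"])
    return [arn for arn, svcs in services_touched.items()
            if len(svcs) >= service_threshold]

# Hypothetical trail: one identity sweeping six services.
arn = "arn:aws:iam::111122223333:user/bedrock-automation"
trail = [{"userIdentityArn": arn, "eventSource": s, "eventName": n}
         for s, n in [("secretsmanager.amazonaws.com", "ListSecrets"),
                      ("s3.amazonaws.com", "ListBuckets"),
                      ("lambda.amazonaws.com", "ListFunctions"),
                      ("ec2.amazonaws.com", "DescribeInstances"),
                      ("kms.amazonaws.com", "ListKeys"),
                      ("bedrock.amazonaws.com", "ListFoundationModels")]]
print(flag_enumerators(trail))  # → the one sweeping identity
```

In practice this kind of rule would run over a time-bounded window of real CloudTrail events rather than a static list, but the shape of the signal is the same.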
After multiple attempts, the attacker successfully created an access key for an administrator user. The injected code listed IAM users, their keys, and associated policies, subsequently generating a new key for the target account. A total of 19 different AWS identities (users and roles) were involved, according to Sysdig’s Threat Research Team (TRT). Some attempts targeted non-existent accounts – a behavior Sysdig attributes to AI-generated “hallucinations.”
Attempts to Launch High-Performance GPU Instances
After verifying that logging was disabled, the attacker invoked several AI models, including Claude, Llama, and Titan. They then attempted to launch EC2 GPU instances of type p5 and subsequently p4d, which cost approximately $32 USD per hour. An initialization script started CUDA, PyTorch, and JupyterLab. References to non-existent GitHub repositories also suggest automated generation, according to the cloud security provider.
This operation highlights the increasing automation of attacks on cloud environments, according to the TRT. Sysdig recommends strictly applying the principle of least privilege, enabling Bedrock logging, and blocking unauthorized instance types using Service Control Policies.
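A Service Control Policy along those lines is straightforward to express. The fragment below is a sketch of a deny rule for the GPU families seen in this attack; the instance-type list is an assumption that each organization would tailor to its own workloads, while `ec2:InstanceType` is the standard condition key for matching on instance type:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyGpuInstanceLaunch",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": { "ec2:InstanceType": ["p4d.*", "p5.*"] }
      }
    }
  ]
}
```

Because SCPs apply at the organization level, a policy like this blocks expensive launches even from identities that have been escalated to administrator within a member account.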
Recent reports corroborate the growing trend of attackers leveraging compromised credentials to exploit cloud resources. The Hacker News reported on a large-scale cryptocurrency mining campaign powered by compromised IAM credentials in AWS. Research published by Entro Security details a new attack vector called “LLMjacking,” in which attackers hijack access to cloud-based AI models using stolen credentials, focusing on non-human identities (NHIs) such as API keys. This is distinct from traditional breaches targeting user passwords.
The increasing sophistication of these attacks, coupled with the growing value of cloud-based AI resources, underscores the need for robust security measures. A report from Mitiga Labs predicts that cybersecurity will become an AI-driven battleground, with attackers and defenders alike increasingly relying on artificial intelligence.
The Sysdig report emphasizes the importance of proactive monitoring and robust access controls to mitigate the risk of similar attacks. The speed with which the attacker gained administrative access – just eight minutes from initial access – demonstrates the critical need for rapid detection and response capabilities in cloud environments.
