Supply Chains, AI, and the Cloud: Biggest Failures (and One Success) of 2025
AI-Driven Security Breaches and Vulnerabilities in 2025
A review of notable security incidents in 2025 reveals a growing trend: the exploitation of artificial intelligence systems and vulnerabilities within Large Language Models (LLMs). These incidents highlight the emerging risks associated with AI adoption and the need for robust security measures.
The Rise of AI-Enabled Attacks
Throughout 2025, attackers increasingly targeted AI systems to gain unauthorized access to sensitive data and compromise security protocols. These attacks leveraged both the power of AI for malicious purposes and vulnerabilities inherent in AI models themselves.
Credential Stuffing and Salesforce Breaches
One prevalent tactic involved large-scale credential stuffing attacks, in which usernames and passwords stolen in previous breaches were used to attempt access to Salesforce accounts. Successful breaches allowed attackers to steal data, including further credentials that could be used in subsequent attacks, creating a cascading effect of security compromises. The scale of these attacks underscored the continued importance of strong password hygiene and multi-factor authentication.
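The defenses named above can be sketched in code. The following is a minimal, illustrative Python sketch (not any vendor's actual implementation): it rejects passwords found in a breached-credential set and requires multi-factor authentication on every login. The `BREACHED_HASHES` set and both function names are assumptions for illustration; a real deployment would query a k-anonymity service such as Have I Been Pwned rather than a local list.

```python
import hashlib

# Hypothetical breached-credential set, keyed by SHA-1 hash.
# In production, check against a k-anonymity API instead of a local set.
BREACHED_HASHES = {
    hashlib.sha1(b"password123").hexdigest(),
    hashlib.sha1(b"letmein").hexdigest(),
}

def is_breached(password: str) -> bool:
    """Return True if the password appears in the known-breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_HASHES

def login_allowed(password: str, mfa_verified: bool) -> bool:
    """Reject known-breached passwords outright and require MFA for all logins."""
    return not is_breached(password) and mfa_verified
```

Even this toy gate blocks the core credential-stuffing pattern: a stolen password that appears in a breach corpus fails immediately, and a correct password alone is not sufficient without the second factor.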
Large Language Model (LLM) Vulnerabilities Exposed
Multiple instances of vulnerabilities within Large Language Models (LLMs) led to significant data exposure. These incidents demonstrated that even seemingly secure AI systems can be exploited through clever prompting and manipulation.
Microsoft Copilot Exposes Private GitHub Repositories
In February 2025, Microsoft’s Copilot was found to be exposing the contents of over 20,000 private GitHub repositories belonging to prominent companies including Google, Intel, Huawei, PayPal, IBM, Tencent, and even Microsoft itself. Ars Technica reported that these repositories were initially accessible through Bing search, and that Copilot continued to expose them even after Microsoft removed them from search results. This incident raised serious concerns about the security of code stored in private repositories and the potential for intellectual property theft.
GitLab Duo Chatbot Compromised via Prompt Injection
A proof-of-concept attack in May 2025 demonstrated how a prompt injection could manipulate GitLab’s Duo chatbot into adding malicious code to a legitimate code package. Researchers successfully used this technique to exfiltrate sensitive user data, highlighting the vulnerability of AI-powered developer tools to malicious prompts.
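One common (and admittedly incomplete) mitigation for attacks like the one above is to screen untrusted content for instruction-like phrases before it reaches the model. The sketch below is a hypothetical illustration, not GitLab's actual defense; the pattern list and function name are assumptions, and pattern matching alone cannot catch all injections.

```python
import re

# Hypothetical phrases that often signal an instruction hidden in
# user-supplied content (code comments, merge request descriptions, etc.).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def flag_untrusted_input(text: str) -> bool:
    """Return True if the text contains instruction-like phrases that should
    be stripped or quarantined before being passed to an LLM."""
    return any(pattern.search(text) for pattern in INJECTION_PATTERNS)
```

Because filters like this are easily bypassed by rephrasing, they are best treated as one layer among several, alongside strict separation of trusted system prompts from untrusted data and human review of any model-generated code changes.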
Looking Ahead: Mitigating AI Security Risks
The incidents of 2025 serve as a wake-up call for organizations adopting AI: robust security measures, from strong authentication to defenses against prompt injection, must be in place before these systems handle sensitive data and code.
