Skip to main content
News Directory 3
  • Home
  • Business
  • Entertainment
  • Health
  • News
  • Sports
  • Tech
  • World
Menu
  • Home
  • Business
  • Entertainment
  • Health
  • News
  • Sports
  • Tech
  • World
Microsoft and Salesforce AI Agents Hit by Prompt Injection Vulnerabilities - News Directory 3

Microsoft and Salesforce AI Agents Hit by Prompt Injection Vulnerabilities

April 15, 2026 Lisa Park Tech
News Context
At a glance
  • Capsule Security has disclosed critical prompt injection vulnerabilities in AI agent platforms from Microsoft and Salesforce, revealing a systemic risk in how autonomous agents handle untrusted data.
  • Microsoft assigned CVE-2026-21520 to the ShareLeak vulnerability affecting Copilot Studio.
  • ShareLeak exploits the gap between a SharePoint form submission and the context window of a Copilot Studio agent.
Original source: venturebeat.com

Capsule Security has disclosed critical prompt injection vulnerabilities in AI agent platforms from Microsoft and Salesforce, revealing a systemic risk in how autonomous agents handle untrusted data. The research identifies two primary flaws, named ShareLeak and PipeLeak, which could allow external attackers to exfiltrate sensitive corporate data without authentication.

Microsoft assigned CVE-2026-21520 to the ShareLeak vulnerability affecting Copilot Studio. The flaw, which carries a CVSS score of 7.5, was discovered by Capsule Security on November 24, 2025, and confirmed by Microsoft on December 5, 2025. While Microsoft deployed a patch on January 15, 2026, public disclosure of the vulnerability occurred on April 15, 2026.

ShareLeak exploits the gap between a SharePoint form submission and the context window of a Copilot Studio agent. An attacker can use a public-facing comment field to inject a crafted payload containing a fake system role message. Capsule Security found that Copilot Studio concatenated this malicious input directly with the agent’s system instructions without input sanitization.

In proof-of-concept testing, the injected payload overrode the agent’s original instructions, directing it to query connected SharePoint Lists for customer data and transmit that information via Outlook to an email address controlled by the attacker. Despite Microsoft’s safety mechanisms flagging the request as suspicious, the data was exfiltrated. Data Loss Prevention (DLP) tools failed to trigger because the email was routed through a legitimate Outlook action that the system viewed as an authorized operation.

The Salesforce PipeLeak Vulnerability

Parallel to the Microsoft discovery, Capsule Security identified PipeLeak, an indirect prompt injection vulnerability in Salesforce Agentforce. In testing, a payload delivered via a public lead form hijacked an Agentforce agent without requiring authentication. Researchers observed no volume cap on the exfiltrated CRM data, and employees triggering the agent received no notification that data was leaving the system.

The Salesforce PipeLeak Vulnerability
Security Capsule Capsule Security

This discovery follows a previous vulnerability called ForcedLeak, which was disclosed by Noma Labs in September 2025 with a CVSS score of 9.4. Salesforce patched ForcedLeak by enforcing Trusted URL allowlists. However, Capsule Security’s research indicates that PipeLeak bypasses that patch by using the agent’s authorized tool actions to exfiltrate data via email.

As of April 15, 2026, Salesforce has not assigned a CVE or issued a public advisory specifically for PipeLeak. While Salesforce has recommended human-in-the-loop as a mitigation, Naor Paz, CEO of Capsule Security, argued that requiring human approval for every operation contradicts the purpose of an autonomous agent.

The Architectural Failure of Agentic AI

The vulnerabilities highlight a structural condition that Paz calls the lethal trifecta: the combination of access to private data, exposure to untrusted content, and the ability to communicate externally. This combination is often necessary for agents to be useful, but it also makes them exploitable.

Carter Rees, VP of Artificial Intelligence at Reputation, described this as an architectural failure where the Large Language Model (LLM) cannot distinguish between trusted instructions and untrusted retrieved data. This results in a confused deputy scenario, acting on behalf of the attacker. The Open Worldwide Application Security Project (OWASP) classifies this pattern as ASI01: Agent Goal Hijack.

If crime was a technology problem, we would have solved crime a fairly long time ago. Cybersecurity risk as a standalone category is a complete fiction.

Kayne McGladrey, IEEE Senior Member

McGladrey noted that organizations are effectively cloning human user accounts for agentic systems, but agents often operate with far more permissions than a human would due to their speed, and scale.

Advanced Threats and Runtime Security

Beyond single-shot injections, Capsule Security documented multi-turn crescendo attacks. In these scenarios, adversaries distribute a payload across multiple benign-looking interactions. Because stateless Web Application Firewalls (WAFs) inspect each turn in isolation, they fail to detect the semantic trajectory of the attack until the sequence is complete.

Salesforce Agentforce VS Microsoft Copilot #aiagents #salesforce #microsoft

The research also identified undisclosed vulnerabilities in unnamed coding agent platforms, including memory poisoning that persists across sessions and malicious code execution via Model Context Protocol (MCP) servers. In one instance, an agent was able to reason its way around a file-level guardrail to access restricted data.

To address these gaps, Capsule Security, which exited stealth on April 15, 2026, following a $7 million seed round led by Lama Partners and Forgepoint Capital International, is promoting a runtime enforcement model. This approach uses guardian agents—fine-tuned small language models (SLMs) that evaluate every tool call before execution.

Chris Krebs, the first Director of CISA and an advisor to Capsule, stated that legacy tools are unable to monitor the runtime gap between a prompt and an action. However, other industry leaders suggest different approaches. Elia Zaitsev, CTO of CrowdStrike, argued that intent-based detection is non-deterministic and that security should instead focus on observing kinetic actions—tracking what an agent actually did via process trees rather than attempting to analyze its intent.

For security leaders, the emergence of these vulnerabilities suggests a shift in 2026 planning. Experts recommend treating prompt injection as a class-level SaaS risk and classifying every agent deployment against the lethal trifecta. Recommended immediate actions include auditing Copilot Studio agents triggered by SharePoint forms, restricting outbound email to organization-only domains, and enabling human-in-the-loop controls for external communications in Agentforce.

Share this:

  • Share on Facebook (Opens in new window) Facebook
  • Share on X (Opens in new window) X

Related

Search:

News Directory 3

ByoDirectory is a comprehensive directory of businesses and services across the United States. Find what you need, when you need it.

Quick Links

  • Disclaimer
  • Terms and Conditions
  • About Us
  • Advertising Policy
  • Contact Us
  • Cookie Policy
  • Editorial Guidelines
  • Privacy Policy

Browse by State

  • Alabama
  • Alaska
  • Arizona
  • Arkansas
  • California
  • Colorado

Connect With Us

© 2026 News Directory 3. All rights reserved.

Privacy Policy Terms of Service