Securing AI Agents: The Critical Role of Identity and Governance
The enterprise shift from reactive AI chatbots to autonomous agentic systems is creating a critical security gap as deployment speeds outpace the development of governance frameworks. While organizations are rapidly integrating AI agents into core business processes to accelerate decision-making, the resulting proliferation of non-human identities is introducing significant operational risk.
According to reporting from InformationWeek on April 30, 2026, the primary threat facing this transition is non-human identity sprawl. Unlike human users, who are managed through structured onboarding and periodic access reviews, non-human identities (NHIs) such as service accounts, API keys, and OAuth tokens often operate in the background with elevated permissions and long-term credentials.
The Scale of Non-Human Identity Sprawl
Non-human identities are essential for allowing services to interoperate within digital environments, but their growth is now exponential. Some environments report ratios of NHIs to human users of 50:1 or higher, according to data from Okta. Agentic AI accelerates this trend by creating thousands of temporary or persistent identities that authenticate continuously across cloud and SaaS environments.
The drive toward adoption is widespread. A report from Deloitte found that nearly three-quarters of 3,325 leaders surveyed plan to deploy agentic AI within two years. However, the infrastructure used to manage these agents is often an afterthought. In many cases, security controls are only added after the authority to use the system has already been granted.
The danger of this sprawl is amplified by the autonomous nature of agents. When permissions are overly broad or poorly governed, AI agents can amplify weaknesses at machine speed. This increases the risk that sensitive data will be exposed or that workflows will extend beyond their original design assumptions.
The Failure of Traditional IAM
Traditional Identity and Access Management (IAM) programs are designed around the human lifecycle—specifically the joiner-mover-leaver process. These frameworks rely on human managers for approval flows and defined roles that change as a person’s responsibilities evolve. Non-human identities do not follow these patterns.
Because NHIs lack a human manager, they frequently slip through standard governance processes. This leads to the accumulation of orphaned identities and stale credentials that are rarely rotated and often over-permissioned. This environment provides high-value targets for attackers, as a compromised agent can cause the same damage as a compromised employee account, often operating more quickly and without direct human supervision.
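To make this failure mode concrete, the sketch below audits a hypothetical NHI inventory for the problems described above: missing owners, stale credentials, and overly broad scopes. The record fields, thresholds, and example identity are assumptions for illustration, not details drawn from the reporting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    """Hypothetical inventory record for a service account, API key, or OAuth token."""
    name: str
    kind: str                      # "service_account" | "api_key" | "oauth_token"
    owner: str | None              # accountable human or team; None means orphaned
    last_rotated: datetime
    scopes: list[str] = field(default_factory=list)

# Assumed policy thresholds, for illustration only.
MAX_CREDENTIAL_AGE = timedelta(days=90)
BROAD_SCOPES = {"*", "admin", "full_access"}

def audit(identities: list[NonHumanIdentity]) -> list[str]:
    """Return findings for identities that would slip through standard governance."""
    now = datetime.now(timezone.utc)
    findings = []
    for nhi in identities:
        if nhi.owner is None:
            findings.append(f"{nhi.name}: orphaned (no accountable owner)")
        if now - nhi.last_rotated > MAX_CREDENTIAL_AGE:
            findings.append(f"{nhi.name}: stale credential (not rotated in 90+ days)")
        if BROAD_SCOPES & set(nhi.scopes):
            findings.append(f"{nhi.name}: over-permissioned ({nhi.scopes})")
    return findings

# Example: an agent's API key issued a year ago, never reviewed, with wildcard scope.
report = audit([
    NonHumanIdentity(
        name="invoice-agent-key",
        kind="api_key",
        owner=None,
        last_rotated=datetime(2025, 4, 30, tzinfo=timezone.utc),
        scopes=["*"],
    )
])
print("\n".join(report))
```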
The risk is further compounded by a lack of baseline authentication in some emerging standards. In April 2026, Adversa AI scanned more than 500 Model Context Protocol (MCP) servers and found that nearly 38% lacked authentication entirely. Knostic identified 1,862 internet-accessible MCP servers that had no identity governance controls in place.
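The kind of gap those scans surface is simple to picture. The minimal sketch below is an assumed illustration, not the methodology used by Adversa AI or Knostic: it sends an unauthenticated request to a hypothetical MCP server endpoint and treats a successful response as a sign that no baseline authentication is enforced.

```python
import requests

def accepts_unauthenticated(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers an unauthenticated request with a 2xx status.

    A server enforcing baseline authentication would typically return 401 or 403
    when no credentials are presented.
    """
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or TLS error; not confirmed open
    return 200 <= resp.status_code < 300

# Hypothetical endpoint for illustration only.
if accepts_unauthenticated("https://mcp.example.com/v1/tools"):
    print("Server responded without credentials: no baseline authentication enforced")
```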
Moving Toward Purpose-Bound Governance
To mitigate these risks, security experts suggest shifting from open-ended, persistent access to purpose-bound permissions. This model ensures that an agent receives access only for the specific duration and scope of a required task, rather than maintaining 24/7 authority.
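A minimal sketch of what a purpose-bound grant could look like follows; the field names and validation logic are assumptions for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class PurposeBoundGrant:
    """Access tied to a specific task, scope, and time window rather than standing authority."""
    agent_id: str
    purpose: str            # the task the grant was issued for
    scope: tuple[str, ...]  # narrowly enumerated actions
    expires_at: datetime

    def permits(self, action: str, at: datetime | None = None) -> bool:
        now = at or datetime.now(timezone.utc)
        return action in self.scope and now < self.expires_at

# Issue access for one task, valid for 15 minutes, instead of a standing credential.
grant = PurposeBoundGrant(
    agent_id="agent-7f3a",
    purpose="reconcile-invoices-2026-04",
    scope=("invoices:read", "ledger:append"),
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
assert grant.permits("invoices:read")
assert not grant.permits("files:read_all")  # broad file access is outside the grant
```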
Nick Nikols of OpenText Cybersecurity outlines a four-part framework for securing agentic AI identities:
- Define: Assigning every agent a unique identifier and establishing tightly scoped, purpose-driven permissions for both human and non-human actors.
- Assess: Establishing clear ownership and ongoing review processes to prevent permission sprawl and orphaned credentials.
- Enforce: Utilizing encryption and persistent policy controls that remain active regardless of how data is accessed.
- Detect: Monitoring access patterns and behavioral changes to identify unusual activity or drift from expected norms.
This approach treats agents as first-class digital citizens with a defined purpose and scope. By integrating these controls, organizations can move away from broad access rights—such as general file access—toward permissions that are specifically tied to a task, purpose, and time limit.
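To make the four steps concrete, here is a compressed, assumed sketch of how Define, Assess, Enforce, and Detect might map onto an agent identity registry. The class and function names are illustrative rather than part of any vendor product, and the encryption aspect of the Enforce step is omitted for brevity.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    agent_id: str                  # Define: unique identifier per agent
    owner: str                     # Assess: accountable human or team
    scopes: set[str]               # Define: tightly scoped, purpose-driven permissions
    last_review: datetime          # Assess: ongoing review to prevent permission sprawl
    baseline_resources: set[str] = field(default_factory=set)  # Detect: expected access pattern

REVIEW_INTERVAL = timedelta(days=30)  # assumed review cadence

def needs_review(agent: AgentIdentity, now: datetime) -> bool:
    """Assess: flag identities whose ownership and permissions have not been re-checked recently."""
    return now - agent.last_review > REVIEW_INTERVAL

def enforce(agent: AgentIdentity, action: str) -> bool:
    """Enforce: the policy check applies to every access attempt, regardless of path."""
    return action in agent.scopes

def detect_drift(agent: AgentIdentity, accessed: set[str]) -> set[str]:
    """Detect: report resources touched outside the agent's established baseline."""
    return accessed - agent.baseline_resources

agent = AgentIdentity(
    agent_id="agent-7f3a",
    owner="finance-platform-team",
    scopes={"invoices:read", "ledger:append"},
    last_review=datetime(2026, 4, 1, tzinfo=timezone.utc),
    baseline_resources={"invoices", "ledger"},
)

now = datetime(2026, 5, 15, tzinfo=timezone.utc)
print(needs_review(agent, now))                         # True: review is overdue
print(enforce(agent, "files:delete"))                   # False: outside defined scope
print(detect_drift(agent, {"invoices", "hr_records"}))  # {'hr_records'}: behavioral drift
```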
“Risk doesn’t disappear, but it becomes more visible and governable, rather than compounding quietly over time until it becomes too significant to easily contain,” Nikols says.
While NIST is currently formalizing standards for agentic systems, industry analysts warn that organizations cannot afford to wait for finalized frameworks. The ability to scale agentic AI will depend on whether governance evolves in parallel with deployment, ensuring that autonomy does not come at the cost of security control.
