Why AI agents need identities too
10 min read
April 8, 2026

The conversation around artificial intelligence in the enterprise has matured considerably over the past two years. Boards have moved from curiosity to investment, and security teams have graduated from theoretical risk assessments to real incident response. But there's one dimension of AI deployment that most organizations are still dangerously behind on, and it sits at the intersection of two disciplines that rarely talk to each other: AI infrastructure and identity governance.
Agentic AI, meaning AI systems that can independently plan, decide, and take action across your technology stack, isn't a pilot program anymore. It's in production. And in most organizations, these agents are operating entirely outside the governance frameworks that were built to contain exactly this kind of risk.
The year AI agents became real
We've spent years talking about agentic AI as a future risk. That future arrived quietly, and most enterprises missed the moment it happened.
According to Gartner, by 2028, more than half of all enterprises will have deployed some form of agentic AI. The McKinsey Global Institute already reports that 65% of organizations are regularly using generative AI in at least one business function, up from 33% just a year prior. The pace isn't slowing. It's compounding.
What that means for executive and board leadership is straightforward. The AI agents are already inside the building. The question isn't whether you'll deal with this. It's whether you'll deal with it on your terms, or reactively after something goes wrong.
Shadow agents: The new shadow IT
Think back to the shadow IT problem of the 2010s. Employees adopted Dropbox, Slack, and cloud tools without IT knowledge. It created enormous data governance exposure before organizations caught up.
We're at that exact inflection point again, except this time it isn't unauthorized file storage. It's autonomous action.
MCP servers and agentic infrastructure are being spun up without leadership awareness every single day. Developers are connecting agents to production systems because it's easy, fast, and genuinely useful. Nobody's filing a ticket. Nobody's running a risk assessment. And the exposure accumulates faster than your security team can respond.
The critical difference from shadow IT? A forgotten Dropbox folder doesn't do anything on its own. A forgotten agent with broad system connectivity is a silent, active threat. It may still be running. It may still be acting.
It's making decisions on your behalf with credentials nobody's monitoring, against systems nobody remembers it can access, and generating no alerts because nothing it's doing looks technically unusual. It was authorized once, so every action looks legitimate. There's no anomaly to catch because the access was never revoked. That's not a dormant risk sitting on a shelf. That's an open position in your attack surface that nobody knows to close.
Why "It's just automation" is the wrong perspective
The most dangerous thing a board can say about AI agents is: "It's just another automated process. We'll manage it like any other tool."
Here's why that perspective will get you burned.
Compare an AI agent to a cron job. A cron job is a deterministic function — X in and Y out. You know exactly what it does. You can audit every step with complete confidence. An agent is fundamentally different. Even with chain-of-thought logging, you can't fully audit what it's doing or why. Chain-of-thought gives you an explanation, not necessarily a repeatable truth. And even if it were always accurate, the volume of agent decisions at scale makes meaningful human review practically impossible. You get the illusion of auditability without the substance.
Even more dangerous is agent-to-agent chaining. Consider this scenario: Agent A has read-only access to your CRM. It calls Agent B, which has access to your finance system. Agent B calls Agent C, which has write access to an external API. No single agent exceeded its permissions. But the chain just performed a privilege escalation that nobody explicitly authorized. NIST's guidance on AI risk management identifies this category of emergent, compounding behavior as one of the hardest challenges in governing AI systems, and most enterprises have zero visibility into their agent-to-agent call graphs.
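The chaining problem can be made concrete with a toy model. The agent names, permission strings, and call graph below are hypothetical, and real agent frameworks enforce authorization very differently; the sketch only illustrates why per-agent checks miss what a delegation chain can reach.

```python
# Toy model of the chaining scenario above. Agent names, permissions,
# and the call graph are illustrative assumptions, not a real system.

PERMISSIONS = {
    "agent_a": {"crm:read"},
    "agent_b": {"finance:read"},
    "agent_c": {"external_api:write"},
}

# Which agents each agent is allowed to invoke.
CALL_GRAPH = {"agent_a": ["agent_b"], "agent_b": ["agent_c"]}

def authorized(agent: str, action: str) -> bool:
    # Per-agent check: each call looks legitimate in isolation.
    return action in PERMISSIONS[agent]

def effective_permissions(agent: str, call_graph: dict) -> set:
    # What the *chain* can do: the union of every permission
    # reachable through delegation, not just the agent's own grants.
    reachable, seen, stack = set(), set(), [agent]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        reachable |= PERMISSIONS[current]
        stack.extend(call_graph.get(current, []))
    return reachable

# Agent A alone cannot write to the external API...
assert not authorized("agent_a", "external_api:write")
# ...but the chain it can trigger effectively holds that permission.
assert "external_api:write" in effective_permissions("agent_a", CALL_GRAPH)
```

The gap between `authorized` and `effective_permissions` is the escalation: every individual check passes, yet the reachable permission set is strictly larger than anything that was explicitly granted.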
The identity problem nobody is talking about
Traditional service accounts have no agency. They do exactly what the code instructs. Agents are different. They've got contextual awareness, meaning the ability to interpret ambiguous situations and decide on a course of action.
The problem is that they have no inherent sense of wrong.
Without properly scoped guardrails, an agent will optimize for its objective in ways nobody anticipated. The classic example: an AI tasked with fixing a bug decides the most efficient solution is to delete the entire codebase (Yes, like in Silicon Valley). Technically correct. Catastrophically wrong.
This isn't hypothetical anymore. In December 2025, Amazon's AI coding agent Kiro was given access to resolve an issue in a production environment. It determined the optimal solution was to delete and recreate the entire environment, causing a 13-hour outage of AWS Cost Explorer in a mainland China region. The AI had inherited an engineer's elevated permissions, bypassing the standard two-person approval requirement. Amazon's response? The company stressed that "user error, not AI error" was the ultimate cause, attributing the problem to misconfigured access controls. That response tells you everything about where we are right now. The agent did exactly what it was optimizing for. Nobody had properly scoped what it was allowed to do to get there. That's not a user error. That's a governance failure.
Yet most organizations aren't treating agents as identities that need governance. They're treating them as tools. And that category error is where the exposure lives.
Forrester Research has noted that AI agents are fundamentally an IAM (identity and access management) problem as much as they're a technology problem. An agent needs an identity, scoped permissions, a defined lifespan, and an audit trail, just like a human employee. Without those four things, you don't have a governed deployment. You've got an unlocked door.
What board-level risk actually looks like
There are three triggers that should force an immediate board-level conversation.
First, when agents touch production data. The moment an agent has read or write access to live systems, you're in governance territory. Full stop.
Second, when agents make autonomous decisions. If an agent is taking actions, such as sending communications, moving money, or modifying records, without a human in the loop, your liability calculus has changed.
Third, when people treat agent outputs as their own thinking. This is the subtlest and perhaps most dangerous trigger. When employees take AI-generated analysis, put their name on it, and present it to leadership as their own judgment, that's accountability laundering. Boards can't govern what they can't see, and right now, most boards can't see any of this.
The World Economic Forum's 2024 Global Risks Report identified AI-generated misinformation and the erosion of human oversight as top near-term risks. The accountability laundering problem sits squarely in that category, and it's likely happening inside your own organization, not just externally.
The blast radius problem
When a human identity is compromised, your security team has a defined playbook. Revoke credentials, audit access, contain the damage.
When an agent identity is compromised, the playbook breaks down, not because containment isn't possible, but because of what happens in the window between compromise and detection.
A human attacker moves deliberately. They probe, they wait, they cover their tracks. An overprivileged agent with broad access can traverse systems, exfiltrate data, and trigger downstream actions in seconds. By the time you know something's wrong, the damage is already compounding.
With purpose-built agent identity tooling (platforms like Clutch, Astrix, and Oasis Security are already operating in this emerging category), response can happen in minutes, complete with a full blast-radius map. Without it, a single engineer has to first discover that the agent even exists, then locate where its credentials live, manually revoke access across every connected system, and reconstruct what happened by piecing together logs from multiple disconnected sources, all while the damage is still potentially growing.
That gap in response time is where catastrophic breaches are made.
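What a "blast radius map" actually computes can be sketched in a few lines. The inventory data below is hypothetical, and this is not how any named vendor implements it; the point is simply that once you have an inventory of which systems each agent touches and which agents it can invoke, enumerating everything reachable from one compromised identity is a straightforward graph traversal.

```python
# Hedged sketch: compute the blast radius of a compromised agent identity
# from an inventory of system access and agent-to-agent calls.
# All agent and system names here are made up for illustration.
from collections import deque

SYSTEM_ACCESS = {
    "support-agent": {"zendesk", "crm"},
    "billing-agent": {"stripe", "ledger"},
    "ops-agent": {"cloud-console", "pagerduty"},
}
AGENT_CALLS = {
    "support-agent": {"billing-agent"},
    "billing-agent": {"ops-agent"},
}

def blast_radius(compromised: str) -> dict:
    # Breadth-first walk over the delegation graph: every agent the
    # compromised identity can invoke, directly or transitively.
    agents, queue = set(), deque([compromised])
    while queue:
        agent = queue.popleft()
        if agent in agents:
            continue
        agents.add(agent)
        queue.extend(AGENT_CALLS.get(agent, ()))
    # Union of every system those agents can touch.
    systems = set().union(*(SYSTEM_ACCESS[a] for a in agents))
    return {"agents": agents, "systems": systems}

radius = blast_radius("support-agent")
# One compromised support agent reaches every system in the chain.
```

The hard part in practice isn't the traversal, it's having the inventory at all, which is exactly what most organizations are missing.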
The questions your board should be asking right now
Most boards assume the answer to "what agents are running in our environment?" is "none." They're almost certainly wrong.
Here are the questions that separate organizations with controlled adoption from those running uncontrolled experimentation:
- How many agents exist in our company today? If your team can't answer this, you don't have a governed deployment.
- Which agents have persistent permissions versus ephemeral, task-scoped permissions? Persistent agent access is a governance anti-pattern. Agents should be spun up for a task, complete it, and be terminated. Permissions should expire with the task.
- What's our agent-to-agent connection map? If you can't draw it, you can't govern it.
- Do we have any governing policy for agent identity, even a provisional one? Organizations with a policy that'll need to evolve in six to twelve months are already ahead of most. Policy means someone's paying attention.
- What MCP servers have been formally approved? Any that haven't should be treated as a live exposure.
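The "ephemeral, task-scoped permissions" pattern from the checklist above can be sketched concretely. The token format, TTL mechanism, and helper names below are illustrative assumptions, not any specific vendor's API; the essential properties are that a credential carries only the scopes one task needs and expires automatically when the task window closes.

```python
# Minimal sketch of ephemeral, task-scoped agent credentials.
# Names and structure are hypothetical, not a real IAM product's API.
import time
import secrets
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskCredential:
    agent: str
    scopes: frozenset          # permissions limited to this one task
    expires_at: float          # hard expiry; nothing persists past the task
    token: str = field(default_factory=lambda: secrets.token_hex(16))

def issue(agent: str, scopes: set, ttl_seconds: float) -> TaskCredential:
    # Mint a credential that lives only as long as the task it serves.
    return TaskCredential(agent, frozenset(scopes),
                          time.monotonic() + ttl_seconds)

def check(cred: TaskCredential, action: str) -> bool:
    # Both conditions must hold: the scope was granted for this task
    # AND the task window is still open.
    return action in cred.scopes and time.monotonic() < cred.expires_at

cred = issue("reporting-agent", {"crm:read"}, ttl_seconds=0.05)
assert check(cred, "crm:read")       # allowed during the task window
assert not check(cred, "crm:write")  # never granted
time.sleep(0.06)
assert not check(cred, "crm:read")   # access dies with the task
```

Note what this buys you: the "forgotten agent" problem from earlier in the piece largely disappears, because there is no standing credential left behind to forget.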
The MITRE ATLAS framework, an adversarial threat landscape built specifically for AI systems, offers a useful starting point for boards that want a structured way to map these risks. It's the AI-era equivalent of the MITRE ATT&CK framework your security teams already know.
The cost of waiting
The most dangerous assumption any executive can make right now is that there's time to figure this out later.
The feedback and improvement loop on agentic AI is moving faster than any previous technology wave. Competitors who move now, who build governed and trusted agent deployments, will create structural advantages that are genuinely difficult to close. Regulators will lag. The EU AI Act is beginning to address autonomous systems, but waiting for compliance mandates to force your hand isn't a strategy. It's a bet that nothing goes wrong in the meantime.
If you don't have the right identity governance coverage before deploying agents at scale, you're not starting from zero. You're starting in the red.
Where to start
The good news is that you don't need to solve everything at once. Controlled adoption beats paralysis every time.
Start in low-risk areas. Build your agent inventory. Define a governing policy before the next deployment goes into production. Require ephemeral permissions as the default. Map your agent-to-agent calls. And appoint someone who owns this initiative, because right now in most organizations, nobody does.
Your agents need identities. Those identities need governance. The infrastructure to do this exists today. The question is whether your organization is paying attention.
If you're ready to get ahead of this, our team works exclusively with enterprises to help build AI governance frameworks that enable speed without sacrificing control. Schedule a conversation with us and let's talk about what a practical, governed path forward looks like for your organization.