Every zero trust conversation I’ve been in over the past year eventually hits the same wall. Someone says “we need to apply zero trust to our AI agents.” Everyone nods. Then someone asks, “But how do we identify an agent?” And the room goes quiet.
The uncomfortable answer is that most organisations are still trying to bolt agent activity onto human identity systems. That’s not zero trust. That’s a workaround wearing a compliance hat.
The Identity Gap Nobody Wants to Talk About
Zero trust’s core principle is simple: never trust, always verify. Every access request must be authenticated, authorised, and continuously validated. We’ve spent years building this for human identities — SSO, MFA, conditional access, session management.
But AI agents aren’t humans. They don’t type passwords. They don’t respond to MFA prompts. They don’t have a single consistent session. They spin up, run a task, call three APIs, spawn sub-agents, and disappear. Some run continuously. Some are ephemeral. Some operate with delegated permissions from a human. Some operate autonomously.
If your identity model can’t distinguish between these patterns, your zero trust implementation has a hole large enough to drive an autonomous truck through.
Why Human Identity Proxies Fail for Agents
The most common pattern I see in the wild is running AI agents under a human user’s identity. The agent inherits the user’s permissions, operates within the user’s session, and shows up in logs as the user.
This creates three serious problems.
First, you lose auditability. When an agent takes an action using a human’s credentials, your security team can’t distinguish between the human doing something and the agent doing something. Every incident investigation becomes a guessing game.
Second, you get permission sprawl. Human users typically accumulate broad permissions over time. An agent operating under those credentials inherits all of that access, even if it only needs a narrow slice. That violates the principle of least privilege, which is foundational to zero trust.
Third, you break containment. If the agent is compromised — through prompt injection, data poisoning, or a supply chain attack on its tooling — the attacker gets everything the human has. The blast radius is the human’s entire access footprint, not just the agent’s actual scope of work.
The Agent Identity Model That Works
The model that actually works treats agents as first-class identity principals — distinct from humans, distinct from applications, and governed by their own lifecycle.
Microsoft is heading in this direction with Entra Agent ID, which sits alongside human identities and workload identities as a separate identity class. The concept is right: agents need their own identity objects, their own credential management, and their own conditional access policies.
Here’s what a production-grade agent identity model looks like in practice.
Dedicated identity per agent. Every agent instance gets its own identity, not a shared service account, not a human’s credentials. This identity carries metadata about the agent’s purpose, owner, model version, and authorised scope of action.
Scoped, just-in-time permissions. Agents should receive only the permissions they need for a specific task, granted at runtime and revoked when done. No standing access. No broad role assignments. If an agent needs to read a SharePoint list to answer a question, it gets read access to that list for the duration of that task.
Attestation-based trust. Before granting access, verify the agent’s identity, its runtime environment, the model it’s using, and the policy it’s operating under. This is analogous to device health attestation in zero trust for endpoints — except the “device” is a runtime container and the “health” includes model integrity.
Delegated authority with explicit constraints. When a human delegates authority to an agent, the delegation should be explicit, time-bounded, and auditable. “This agent can act on my behalf to schedule meetings for the next 8 hours” is a valid delegation. “This agent has my full Entra ID permissions forever” is not.
Independent audit trail. Every action an agent takes must be logged under the agent’s own identity, with correlation back to the authorising human and the triggering event. When your SOC reviews alerts, they should see exactly what the agent did, why, and who authorised it.
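To make the model concrete, here is a minimal sketch of how these five pieces fit together: a dedicated identity per agent, just-in-time scoped grants, time-bounded delegation, and an audit trail keyed to the agent’s own identity. Every class, field, and method name here is illustrative — this is not an Entra (or any vendor) API, just the shape of the model in code.

```python
# Hypothetical agent identity model: dedicated principals, JIT grants,
# time-bounded delegation, and agent-keyed audit logging.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str          # dedicated principal — never a human's account
    owner: str             # the accountable human
    purpose: str
    model_version: str

@dataclass
class Grant:
    resource: str          # e.g. one specific SharePoint list
    actions: tuple         # e.g. ("read",)
    expires_at: datetime   # no standing access

class AgentAuthority:
    def __init__(self, identity: AgentIdentity, delegated_by: str, ttl: timedelta):
        self.identity = identity
        self.delegated_by = delegated_by          # explicit, named delegation
        self.delegation_expires = datetime.now(timezone.utc) + ttl
        self.grants = []
        self.audit_log = []

    def grant_jit(self, resource: str, actions: tuple, ttl: timedelta) -> None:
        """Runtime-scoped grant that expires on its own — no broad roles."""
        self.grants.append(
            Grant(resource, actions, datetime.now(timezone.utc) + ttl))

    def can(self, resource: str, action: str) -> bool:
        now = datetime.now(timezone.utc)
        if now >= self.delegation_expires:        # delegation is time-bounded
            return False
        return any(g.resource == resource and action in g.actions
                   and now < g.expires_at for g in self.grants)

    def act(self, resource: str, action: str, trigger: str) -> bool:
        allowed = self.can(resource, action)
        # Logged under the AGENT's identity, correlated back to the
        # authorising human and the triggering event.
        self.audit_log.append({
            "agent_id": self.identity.agent_id,
            "authorised_by": self.delegated_by,
            "trigger": trigger,
            "resource": resource,
            "action": action,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed
```

The point of the sketch is the separation of concerns: the identity object carries provenance metadata, grants carry their own expiry, and the audit record names both the agent and the authorising human on every action — denied attempts included.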
Conditional Access for Agents
One of the most underutilised capabilities in enterprise identity today is conditional access for workload identities. Microsoft Entra already supports this for service principals, and the same patterns extend naturally to agent identities.
Think about what conditional access policies should look like for agents. An agent accessing sensitive data should only be allowed from trusted network locations. An agent flagged with anomalous behaviour by identity protection should have its access revoked in real time. An agent running on an unverified runtime should be blocked entirely.
These aren’t theoretical patterns. This is the same conditional access logic we apply to human users and devices, extended to agent principals. The policy engine doesn’t care whether the identity is a person, a laptop, or an AI agent. It evaluates signals, applies policy, and grants or denies access.
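The policy logic described above can be sketched as a single evaluation function: signals in, allow-or-deny out. The signal names and policy shape here are assumptions for illustration — they do not correspond to a real Entra policy schema — but the evaluation order mirrors the three examples above: block unverified runtimes, revoke on risk, and gate sensitive data on network location.

```python
# Hypothetical conditional-access evaluation for agent principals.
# The engine evaluates signals and applies policy; it does not care
# whether the principal is a person, a laptop, or an agent.
def evaluate_access(signals: dict) -> tuple:
    """Return (allowed, reason) for an agent access request."""
    # Agents on unverified runtimes are blocked entirely.
    if not signals.get("runtime_attested", False):
        return False, "runtime not attested"
    # Agents flagged with anomalous behaviour lose access in real time.
    if signals.get("risk_level", "none") in ("medium", "high"):
        return False, "anomalous behaviour flagged"
    # Sensitive data only from trusted network locations.
    if (signals.get("data_sensitivity") == "high"
            and signals.get("network_location") != "trusted"):
        return False, "sensitive data from untrusted location"
    return True, "all conditions satisfied"
```

Note that the function fails closed: the default for a missing attestation signal is deny, which is the only sensible default for a zero trust posture.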
The organisations that get this right will have an agent security posture that actually holds up under audit. The ones that don’t will be explaining to their board why an AI agent with a marketing coordinator’s permissions exfiltrated their customer database.
The Architecture Pattern
The pattern I’m using with enterprise clients looks like this.
An identity layer in Entra ID (or equivalent) manages agent identities as dedicated principals. A policy layer defines what each agent can access, under what conditions, and for how long. An attestation layer verifies agent health, model version, and runtime integrity before granting access. An audit layer captures every agent action with full provenance — who authorised, what triggered, what happened.
These four layers sit between the agent and every resource it touches. No exceptions. No shortcuts. No “we’ll add security later.”
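The four layers can be read as one enforcement pipeline, evaluated in order on every request. The sketch below is an assumption about how the layers compose, not an implementation of any product: in practice the identity check maps to an identity provider, the policy check to a policy engine, attestation to a verification service, and the log append to a telemetry pipeline.

```python
# Hypothetical four-layer enforcement pipeline between an agent and a
# resource: identity -> attestation -> policy -> audit. No shortcuts:
# every request, allowed or denied, lands in the audit log.
def enforce(request: dict, known_agents: set, policy: dict,
            attest, audit_log: list) -> bool:
    agent = request["agent_id"]
    # Identity layer: the agent must be a known, dedicated principal.
    if agent not in known_agents:
        decision, reason = False, "unknown principal"
    # Attestation layer: verify runtime and model integrity first.
    elif not attest(request):
        decision, reason = False, "attestation failed"
    # Policy layer: what this agent can access, under what conditions.
    elif request["resource"] not in policy.get(agent, set()):
        decision, reason = False, "not permitted by policy"
    else:
        decision, reason = True, "allowed"
    # Audit layer: capture every decision with full provenance.
    audit_log.append({
        "agent": agent,
        "resource": request["resource"],
        "authorised_by": request.get("authorised_by"),
        "trigger": request.get("trigger"),
        "decision": decision,
        "reason": reason,
    })
    return decision
```

The ordering is deliberate: an unknown or unattested agent never reaches policy evaluation, and the audit append happens unconditionally, so a denied request leaves the same forensic trail as an allowed one.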
The investment required isn’t massive. Most of the infrastructure already exists in enterprise identity platforms. The gap is recognition — recognising that agents are a new identity class that deserves the same architectural rigour we applied to human identities a decade ago and device identities five years ago.
Where This Is Heading
Agent identity isn’t an edge case anymore. Microsoft’s Entra Agent ID, along with similar moves from other vendors, signals that the industry recognises agents as a permanent fixture in enterprise architecture.
Within the next twelve months, I expect agent identity to become a standard component of zero trust maturity models. Compliance frameworks will start asking how you govern agent access. Cyber insurers will start asking whether your agents have independent identity and audit controls.
The organisations that build agent identity infrastructure now will have a significant advantage when those requirements arrive. The ones that wait will be retrofitting security onto autonomous systems that are already in production. I’ve seen that movie before. It doesn’t end well.
Zero trust for AI starts with identity. But the identity that matters most right now isn’t yours. It’s your agents’.