To Whom Do Autonomous Agents Answer? The Identity and Governance Problem

Author(s): Manny Arora

Originally published on Towards AI.

This is Part 2 of a two-part series on Agent Identity.
Part 1: Identity Management for Agentic AI: Making Authentication and Authorization Digestible
https://medium.com/towards-artificial-intelligence/identity-management-for-agentic-ai-making-authentication-authorization-digestible-0fc5bb212862

TL;DR

  • Agentic AI challenges traditional IAM by introducing autonomous actors that act on behalf of humans or organizations.
  • Identity is no longer a single ID; it is rich metadata, context, and trust.
  • Delegation (including recursive delegation), continuous access, and dynamic tool discovery introduce new security and governance complexities.
  • AI-assisted, risk-based governance will be required to prevent consent fatigue and keep human oversight scalable.

Introduction

Today’s IAM systems assume a human at the center: someone who logs in, grants consent, and performs tasks. But agentic AI – autonomous systems capable of planning, acting, and interacting – breaks that assumption.

A recent whitepaper on Agentic AI Identity Management (arXiv:2510.25819) makes it clear that identity for agents is much more than a username or token. Delegation, dynamic tool access, and increasingly autonomous behavior force us to rethink authentication, authorization, and governance.

This article explores the key challenges in building secure, scalable, and reliable agent identity systems.

1. Agent Identity: Beyond a Simple ID

Traditional identity is simple: an identifier that represents an individual. For agents, identity must carry context-rich metadata, not just a name.

Without rich metadata, we cannot reason about risk or apply nuanced policies to autonomous agents.

Current IAM systems are largely static and human-centric, making them unsuitable for handling dynamic, autonomous agents. Future identity models should be (a minimal sketch follows the list):

  • Extensible – supporting new features as agent capabilities evolve
  • Verifiable – cryptographically provable and auditable
  • Machine-readable – enabling automated policy evaluation and enforcement
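
As an illustration of what such a machine-readable identity could look like, here is a minimal sketch in Python. The field names (owner, capabilities, delegation_chain, attestations, risk_tier) are assumptions for illustration, not taken from the whitepaper or any standard.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative schema for a machine-readable agent identity.
# Field names are assumptions, not drawn from a published standard.
@dataclass
class AgentIdentity:
    agent_id: str                                                # stable identifier
    owner: str                                                   # human or org the agent acts for
    capabilities: list[str] = field(default_factory=list)        # what it may do
    delegation_chain: list[str] = field(default_factory=list)    # who delegated to whom
    attestations: dict[str, str] = field(default_factory=dict)   # verifiable claims
    risk_tier: str = "low"                                       # input to policy evaluation

identity = AgentIdentity(
    agent_id="agent-travel-007",
    owner="user:alice@example.com",
    capabilities=["calendar:read", "flights:book"],
    delegation_chain=["user:alice@example.com"],
    attestations={"model_version": "sha256:..."},
)
print(identity)
```

In practice the attestations would be cryptographically signed, which is what keeps the record verifiable and auditable end to end.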

2. Delegated Authority and Transitive Trust

Agents rarely act alone. To work on behalf of humans or other agents, they require delegated access, which introduces complex trust relationships.

2.1 On-behalf-of (OBO) delegation

OBO delegation allows an agent to perform tasks for a user. Unlike human delegation, agent delegation can be continuous and automated, which raises questions:

  • How long should an agent retain access?
  • How do we prevent the agent from exceeding the intended scope?

User → grants delegation → Agent A → acts on downstream APIs
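
To make those questions concrete, here is a minimal sketch of a time-bounded, scope-limited OBO grant, assuming an in-memory grant store and made-up helper names; a production system would typically rely on an authorization server (for example, OAuth 2.0 token exchange).

```python
import time
import secrets

# Hypothetical in-memory store of OBO grants, used only for illustration.
GRANTS = {}

def grant_obo(user: str, agent: str, scopes: set[str], ttl_s: int = 900) -> str:
    """User delegates a narrow, expiring scope set to an agent."""
    token = secrets.token_urlsafe(16)
    GRANTS[token] = {"user": user, "agent": agent,
                     "scopes": scopes, "expires_at": time.time() + ttl_s}
    return token

def check_obo(token: str, agent: str, scope: str) -> bool:
    """Deny if the grant is missing, expired, or the scope was never delegated."""
    g = GRANTS.get(token)
    return bool(g and g["agent"] == agent
                and time.time() < g["expires_at"]
                and scope in g["scopes"])

t = grant_obo("alice", "agent-A", {"calendar:read"}, ttl_s=600)
print(check_obo(t, "agent-A", "calendar:read"))   # True
print(check_obo(t, "agent-A", "calendar:write"))  # False: beyond delegated scope
```

The two knobs that matter are the TTL (how long access lasts) and the scope set (how far it reaches); both are enforced on every call, not only at grant time.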

2.2 Recursive delegation

Recursive delegation occurs when an agent delegates access to another agent, which may delegate further. Each hop increases the risk:

User → Agent A → Agent B → Agent C → ...

Major Concerns:

  • Policies must propagate recursively across all delegation hops
  • Risk assessment must account for trust that varies at each hop
  • Accountability becomes difficult to trace
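
One common mitigation, offered here as an assumption rather than anything prescribed by the whitepaper, is to attenuate scope and cap chain depth at every hop, so a downstream agent can never hold more than its delegator did. A minimal sketch:

```python
MAX_DEPTH = 3  # assumed policy: no more than three delegation hops

def delegate(chain: list[str], parent_scopes: set[str],
             child: str, requested: set[str]) -> tuple[list[str], set[str]]:
    """Each hop may only pass on a subset of its own scopes (attenuation)."""
    if len(chain) >= MAX_DEPTH:
        raise PermissionError("delegation chain too deep")
    granted = parent_scopes & requested       # never widen scope downstream
    return chain + [child], granted

chain, scopes = ["user:alice"], {"files:read", "files:write"}
chain, scopes = delegate(chain, scopes, "agent-A", {"files:read", "files:write"})
chain, scopes = delegate(chain, scopes, "agent-B", {"files:read", "admin:all"})
print(chain, scopes)  # ['user:alice', 'agent-A', 'agent-B'] {'files:read'}
```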

2.3 Revocation challenges

Revocation is no longer a single action. With multiple layers of delegation:

  • Revocation must be broadcast in real time to all dependent agents
  • Failure to revoke properly can leave lingering access and create systemic vulnerabilities
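
A sketch of cascading revocation: revoking one grant also revokes every grant derived from it. The graph structure below is an assumption about how delegation records might be linked; a real system would also need to push the revocation to resource servers in real time.

```python
# Hypothetical delegation graph: grant id -> ids of grants derived from it.
DERIVED = {
    "g-user-to-A": ["g-A-to-B"],
    "g-A-to-B": ["g-B-to-C"],
    "g-B-to-C": [],
}
ACTIVE = set(DERIVED)

def revoke_cascade(grant_id: str) -> None:
    """Revoke a grant and, transitively, everything delegated from it."""
    if grant_id in ACTIVE:
        ACTIVE.discard(grant_id)
        for child in DERIVED.get(grant_id, []):
            revoke_cascade(child)

revoke_cascade("g-user-to-A")
print(ACTIVE)  # set(): the whole downstream chain is gone
```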

2.4 Deprovisioning and offboarding

Agents may be short-lived, cloned, or moved throughout the system. Deprovisioning must ensure that nothing the agent held – credentials, delegations, access grants – outlives the agent itself.

The lifecycle of an agent is complex and dynamic; deprovisioning cannot be treated like simple human account deletion.
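
As a sketch, with a hypothetical registry, credential store, and grant table, offboarding should sweep up everything tied to the agent's identity rather than deleting a single record:

```python
def offboard(agent_id: str, registry: dict, credentials: dict, grants: dict) -> None:
    """Hypothetical offboarding: remove the identity record, its credentials,
    and every delegation it issued or received."""
    registry.pop(agent_id, None)                # identity record
    credentials.pop(agent_id, None)             # keys/tokens it held
    for gid, g in list(grants.items()):         # delegations in either direction
        if agent_id in (g["from"], g["to"]):
            del grants[gid]

registry = {"agent-A": {"owner": "alice"}}
credentials = {"agent-A": "api-key-123"}
grants = {"g1": {"from": "alice", "to": "agent-A"},
          "g2": {"from": "agent-A", "to": "agent-B"}}
offboard("agent-A", registry, credentials, grants)
print(registry, credentials, grants)  # {} {} {}
```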

3. Registries and Dynamic Tool Discovery

Unlike humans, agents will dynamically discover and connect to new services and devices:

  • Self-provision SaaS applications, APIs, or cloud resources
  • Automatically negotiate capabilities and access

This dynamism creates trust challenges: how does an agent (or its owner) know that a newly discovered tool is what it claims to be?

IAM is no longer a static permissions model – it becomes a living, adaptive ecosystem.
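
One way to ground that trust, assumed here rather than taken from the whitepaper, is to gate every newly discovered tool through a fingerprinted registry entry before an agent may call it. A minimal sketch:

```python
import hashlib

# Hypothetical trusted registry: tool name -> expected manifest fingerprint.
TRUSTED_REGISTRY = {
    "weather-api": hashlib.sha256(b"weather-api|v1|read:forecast").hexdigest(),
}

def verify_discovered_tool(name: str, manifest: bytes) -> bool:
    """Allow a dynamically discovered tool only if its manifest matches the registry."""
    expected = TRUSTED_REGISTRY.get(name)
    return expected is not None and hashlib.sha256(manifest).hexdigest() == expected

print(verify_discovered_tool("weather-api", b"weather-api|v1|read:forecast"))  # True
print(verify_discovered_tool("weather-api", b"weather-api|v2|write:anything")) # False
print(verify_discovered_tool("unknown-tool", b"whatever"))                     # False
```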

4. Scalable Human Governance

As autonomous agents proliferate, human oversight faces a scaling problem: no person can manually review and approve every action of every agent without drowning in consent requests.

Future governance will require AI-assisted oversight, including:

  • ✅ Risk-based auto-approval – low-risk actions proceed automatically (see the sketch below)
  • ✅ Adaptive consent policies – policies that evolve with agent behavior and context
  • ✅ Interpretable audit trails – human-readable logs for accountability
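
To illustrate the first bullet, here is a minimal sketch of a risk-based approval gate. The risk scores and the threshold are invented for illustration; a real system would derive them from policy, context, and the agent's identity metadata.

```python
# Hypothetical risk scores per action type; the threshold is illustrative.
RISK = {"calendar:read": 0.1, "email:send": 0.5, "payment:execute": 0.9}
AUTO_APPROVE_BELOW = 0.3

def route_action(action: str) -> str:
    """Low-risk actions proceed automatically; everything else escalates to a human."""
    score = RISK.get(action, 1.0)   # unknown actions are treated as high risk
    if score < AUTO_APPROVE_BELOW:
        return "auto-approved"
    return "escalate-to-human"

for action in ["calendar:read", "email:send", "payment:execute", "db:drop"]:
    print(action, "->", route_action(action))
```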

Humans must remain in control, but AI must scale governance to match the autonomy of the agent.

Conclusion

Agent identity is the foundation of a trustworthy AI ecosystem. From rich metadata to recursive delegation, dynamic tool discovery, and scalable governance, the challenges are both immediate and profound.

To address them, IAM needs to be rethought from the ground up:

  1. Machine-readable, context-rich identity attributes
  2. Delegation-aware policies that account for varying trust
  3. Continuous discovery and verification of agent-accessible resources
  4. AI-assisted governance to prevent consent fatigue

The future isn’t just about building smart agents – it’s about building agents we can trust.
