Enterprises must prioritize governance amid agentic AI boom

Businesses should treat AI agents as formal digital identities as 2026 shapes up to be the year of the agent workforce, amid concerns that most companies are unprepared for the security and governance risks of the technology.

Greg Callegari, managing director of identity security at Accenture, said as much during a recent webinar discussion with Harish Perry, senior vice president and general manager of AI security at identity management firm Okta.

Most organizations – 91% – are already using AI agents, but only 10% feel they have an effective governance strategy for them, according to Okta.

Similarly, Accenture’s State of Cybersecurity Resilience 2025 research found that 90% of organizations lack a clear strategy for managing AI-related threats, while 91% are already using AI agents in some capacity.

Autonomous systems are increasingly embedded in business workflows, from drafting documents and scheduling meetings to more advanced tasks such as software development. With this rapid proliferation, sustainable and measured deployment is critical. Without it, Perry warned, agentic AI deployment could create a new form of identity sprawl.

“In 2026, you will have tens, if not hundreds, of AI agents working on your behalf in your workforce,” he said. “The problem is really simple: All of these agents need access to your systems. Without access, they’re useless. And that’s why the question of agent identity, and what an agent can access, becomes the key to everything.”


Agent Identification

Unlike traditional chatbots, modern agents have the power to directly interact with and control enterprise systems while performing tasks previously reserved for human workers. To increase transparency and accountability in monitoring agent actions, Callegari argued that they should be treated as individual entities, not unlike human workers.

At its core, the challenge is familiar: Companies need to manage authentication, authorization, and access controls to keep track of the technology at scale.

“If you remove all the noise, it’s really an open authorization problem,” Callegari said. “It’s a machine talking to a resource. The question is: should it be allowed to be there, who grants it access, for how long and who revokes it?”

The scale and pace of technology development are exacerbating the issue. In many cases, engineers are encouraged to prioritize speed over governance, leaving large numbers of unmanaged non-human identities in enterprise environments.

“Agents are acting like employees, performing tasks that humans do,” Callegari said. “So, the way to secure them is to manage them as identities.”


In this light, Callegari said agents should be onboarded, governed and monitored just like human employees, with defined identities and lifecycle management.

“Agents need to identify themselves,” he said. “Once you accept this, everything else flows – access control, governance, auditing and compliance.”

Better-defined standards and governance models were also highlighted as an important consideration for companies looking to adopt agentic AI. Having these models in place before opening the door to large-scale deployment is critical to long-term viability, the speakers said.

The matter is also expected to be considered at the regulatory level, with compliance regimes planned in the US and EU that will require greater transparency and accountability for agents.

While the future of agentic AI is widely seen as exciting and opportunity-rich, the message from security leaders like Callegari and Perry is clear: without adequate governance and identity structures, agentic AI could turn from the technology’s greatest productivity boost into its greatest risk.
