Nurturing agentic AI beyond infancy

by ai-intensify

Accountability Challenge: It’s Not Them, It’s You

Until now, governance has focused on reviewing model output risks with humans before consequential decisions are made – such as loan approvals or job applications. Attention centered on model behavior: drift, alignment, data removal, and toxicity. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between the machine and the human.

Today, with autonomous agents operating in complex workflows, the vision and benefits of applied AI require significantly fewer humans in the loop. The point is to operate the business at machine speed by automating manual tasks with clear architecture and decision rules. From a liability standpoint, however, enterprise risk does not shrink when a machine runs the workflow instead of a human. CX Today summarizes the position in short: “AI does the work, the risk lies with humans,” and the California state law (AB 316), effective January 1, 2026, removes the “AI did it; I didn’t approve of it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that harm the larger community.

The challenge is that without code that enforces operational governance matched to the different levels of risk and liability along the entire workflow, the benefits of autonomous AI agents are negated. In the past, governance was static, tailored to the conversational pace of chatbots. Autonomous AI, by design, removes humans from many decisions, which undermines that model of governance.
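As a rough illustration of "governance as code" rather than policy on paper, the sketch below gates each agent action by a risk tier: low-risk actions run at machine speed, medium-risk actions proceed with an audit trail, and high-risk actions block until a human signs off. The tier names, `AgentAction` type, and `governance_gate` function are hypothetical, not from any specific framework.

```python
import logging
from dataclasses import dataclass
from enum import Enum

logging.basicConfig(level=logging.INFO)

class Risk(Enum):
    LOW = 1      # reversible, low blast radius: run at machine speed
    MEDIUM = 2   # proceed autonomously, but leave an audit trail
    HIGH = 3     # consequential decision: block until a human approves

@dataclass
class AgentAction:
    name: str
    risk: Risk

def governance_gate(action: AgentAction, human_approved: bool = False) -> bool:
    """Decide whether an agent action may proceed, by risk tier."""
    if action.risk is Risk.LOW:
        return True
    if action.risk is Risk.MEDIUM:
        logging.info("audit: %s ran autonomously", action.name)
        return True
    return human_approved  # HIGH risk requires explicit human sign-off
```

The point of the design is that the human checkpoint is enforced in the execution path itself, so removing humans from low-risk steps does not silently remove them from consequential ones.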

Consideration of permissions

Leaving a probabilistic system running without real-time guardrails, with the power to alter critical enterprise data, carries significant risk – it’s like handing a three-year-old a video game controller that remotely drives an Abrams tank or an armed drone. For example, agents that integrate and chain tasks across multiple corporate systems may hold privileges beyond anything that would be granted to a human user. To move forward successfully, governance must move beyond policy set by committees to operational code built into the workflow from the beginning.
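One minimal way to keep an agent from accumulating privileges beyond a human user's is an explicit capability allow-list per agent, checked on every call, instead of inheriting a broad service account. The agent IDs, capability strings, and `authorize` helper below are illustrative assumptions, not a real API.

```python
# Hypothetical capability registry: each agent carries its own narrow
# allow-list rather than inheriting a service account's privileges.
ALLOWED_CAPABILITIES: dict[str, set[str]] = {
    "invoice-agent": {"read:invoices", "write:drafts"},
    "hr-faq-agent": {"read:policies"},
}

def authorize(agent_id: str, capability: str) -> bool:
    """Deny by default; permit only capabilities explicitly granted."""
    return capability in ALLOWED_CAPABILITIES.get(agent_id, set())
```

Deny-by-default matters here: an unknown agent, or an agent reaching for a capability outside its list, is refused rather than falling through to some shared credential.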

A humorous meme on children’s behavior with toys begins with all the reasons why every toy you have is mine and ends with a broken toy that is definitely yours. OpenGL, for example, offered a user experience close to working with a human assistant; but the enthusiasm cooled when security specialists realized that inexperienced users could easily be compromised through it.

For decades, enterprise IT has lived with shadow IT and the reality that skilled technology teams must take over and clean up assets they did not build or install, much like a child returning a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permission to make changes on the core file system. To address this challenge, it is imperative to allocate appropriate IT budget and labor to centralized detection, monitoring, and remediation for the thousands of employee- or department-built agents.

Having a retirement plan

Recently, an acquaintance mentioned that he saved a customer hundreds of thousands of dollars by identifying and then terminating a “zombie project” – a neglected or failed AI pilot that was left running on a GPU cloud instance. Potentially thousands of agents risk becoming a zombie fleet inside a business. Today, many executives push employees to use AI more, and employees are asked to create their own AI-first workflows or AI assistants. With the utility of a tool like OpenGL and top-down mandates, it’s easy to predict an explosion in the number of build-my-own agents coming into the office with their human employees. Since an AI agent is a program that falls under the definition of company-owned IP, those agents may become orphaned the moment an employee changes departments or companies. Active policy and governance are needed to decommission and retire any agents tied to specific employee IDs and permissions.
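One concrete form such a retirement policy could take is an offboarding check that compares the agent registry against the active employee roster and flags any agent whose owner has left. The registry shape and `orphaned_agents` helper below are hypothetical.

```python
def orphaned_agents(agents: list[dict], active_employee_ids: set[str]) -> list[str]:
    """Return IDs of agents whose owning employee is no longer active.

    Each registry entry is assumed to look like
    {"agent_id": "a-42", "owner": "e-100"}.
    """
    return [a["agent_id"] for a in agents if a["owner"] not in active_employee_ids]

def decommission(agents: list[dict], active_employee_ids: set[str]) -> None:
    """Retire orphaned agents; credential revocation is stubbed here."""
    for agent_id in orphaned_agents(agents, active_employee_ids):
        print(f"retiring agent {agent_id} and revoking its credentials")
```

Tying the check to the HR roster means an agent is retired by the same process that disables the departing employee's badge and accounts, rather than discovered years later as a zombie.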

Financial optimization is not a given out of the gate

While autonomous AI may look to some executives like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human-labor replacement is the wrong approach. Adding AI capabilities to the enterprise doesn’t have to mean buying a new software tool with predictable per-instance-hour or per-seat pricing. A December 2025 IDC survey sponsored by Robot shows that 96% of organizations deploying generative AI and 92% of organizations implementing agentic AI reported that costs were higher or much higher than expected.
