Deploying AI Agents Isn’t Your Typical Software Launch – 7 Lessons from the Trenches



ZDNET Highlights

  • Agent deployment is different from traditional software launch.
  • Governance with agents cannot be an afterthought.
  • ‘AgentOps’ now enters the scene.

The excitement about AI agents may seem extreme, but remember: making these tools productive requires work and planning on the ground. Key considerations include giving agents freedom, but not too much, as well as rethinking traditional return-on-investment measures.

Also: 3 ways AI agents will make your work unrecognizable in the next few years

According to Kristin Burnham, writing in MIT Sloan Management Review, effective AI agent development and management requires making informed choices about control, investment, governance, and design. Reviewing recent research conducted by MIT Sloan and Boston Consulting Group, she cites "tensions" that AI agent developers and proponents need to be aware of:

  • Placing too many restrictions on agentic systems limits their effectiveness, while allowing too much freedom can lead to unpredictability.
  • Agent AI forces organizations to rethink how they assess costs, time, and return on investment.
  • Organizations must decide whether to quickly introduce agentic AI into existing workflows or take the time to completely redesign those workflows.

Also: Forensic vibers wanted – and 10 other new job roles AI could create

There is agreement across the industry that agents require new ideas beyond what we have become accustomed to in traditional software development. In the process, new lessons are being learned. Industry leaders shared some of their lessons with ZDNET as they move toward an agentic AI future.

1. Governance matters a lot

“Confidence is not accuracy,” said Nick Kale, a principal engineer at Cisco who led a team that deployed agents to provide expert-level technical guidance to more than 100,000 users. Early versions of the agents “could respond confidently but inaccurately, requiring us to invest heavily in grounding responses through retrieval and structured knowledge.”
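The grounding idea described above can be sketched in a few lines: retrieve relevant snippets from a curated knowledge base, answer only from what was retrieved, and refuse rather than guess. This is a minimal illustration with a toy keyword-overlap retriever; the names (`knowledge_base`, `grounded_answer`) and the networking content are assumptions, not Cisco's actual system.

```python
# Toy knowledge base mapping a topic keyword to a vetted snippet.
knowledge_base = {
    "vlan": "VLANs segment a physical network into isolated broadcast domains.",
    "ospf": "OSPF is a link-state routing protocol that floods LSAs to build a topology map.",
    "bgp": "BGP exchanges routing information between autonomous systems.",
}

def retrieve(question: str) -> list[str]:
    """Return knowledge snippets whose topic keyword appears in the question."""
    q = question.lower()
    return [text for topic, text in knowledge_base.items() if topic in q]

def grounded_answer(question: str) -> str:
    """Answer only from retrieved snippets; refuse rather than hallucinate."""
    snippets = retrieve(question)
    if not snippets:
        return "I don't have grounded knowledge to answer that."
    return " ".join(snippets)
```

A production system would use embedding-based retrieval over a real corpus, but the contract is the same: the agent's confidence never substitutes for retrieved evidence.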

An important lesson learned, Kale said, was that “governance cannot be retrofitted.” “When inspections and policy controls are added late, the system often lacks the architectural hooks to support them, forcing painful disruption or redesign.”

Also: 8 ways to make responsible AI part of your company’s DNA

Over time, trust grows, Kale said. “Once the system is performing well, human scrutiny is reduced. That’s when scope creep and unintended autonomy can emerge if boundaries are not clear.”

Kale urged AI agent proponents to “provide autonomy in proportion to reversibility, not based on model confidence. There should always be human oversight on irreversible actions in many domains, no matter how confident the system appears.” Observability is also important, Kale said. “Being able to see how a decision was reached matters as much as the decision itself.”
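The "autonomy in proportion to reversibility" rule can be made concrete with a small runtime sketch: reversible actions execute autonomously, irreversible ones are queued for human approval regardless of model confidence, and every decision is logged for observability. All class and action names here are hypothetical illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    reversible: bool
    confidence: float  # model confidence: deliberately NOT used for gating

@dataclass
class AgentRuntime:
    audit_log: list = field(default_factory=list)
    approval_queue: list = field(default_factory=list)

    def execute(self, action: Action) -> str:
        if action.reversible:
            status = "executed"
        else:
            # High confidence does not bypass oversight on irreversible actions.
            self.approval_queue.append(action)
            status = "pending_human_approval"
        # Observability: record what was decided and the confidence at the time.
        self.audit_log.append((action.name, action.confidence, status))
        return status
```

Note that a 0.99-confidence irreversible action still waits for a human, while a low-confidence reversible one runs immediately: the gate is reversibility, not confidence.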

2. Start narrow

With agents, “we started intentionally narrow,” said Tolga Tarhan, Atomic Gravity’s CEO. “Most of the agents we deploy are limited to a single domain with clear guardrails and measurable outcomes. That could be an engineering co-pilot, an operations assistant, or an agent that synthesizes complex datasets for executives.”

3. Ensure data quality

“AI works well when it has quality data,” said Oleg Daniliuk, CEO of Duenex, a marketing agency that created an agent to automate the verification of leads from visitors to its site. “In our case, to understand whether a lead is interesting to us, we need to get as much data as possible, and the hardest to obtain is social network data, because it is inaccessible to most scrapers. That’s why we had to implement multiple solutions and collect only the public portion of the data.”

Also: No, AI isn’t stealing your tech job — it’s just replacing it

“Data quality is the number one issue,” Tarhan agreed. “Models only perform according to the information they are given.”

4. Start with the problem – not the technology

“Define success in advance,” Tarhan said. “Instrument everything. Keep humans in the loop longer than necessary. And invest early in observability and governance. When done right, AI agents can be transformative. When rushed, they become expensive demos. The difference is discipline.” Tarhan’s team makes sure to treat agents with roadmaps, feedback loops, and constant iteration — and “not as science experiments.”
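"Define success in advance" and "instrument everything" can be combined in one small pattern: pick the success metric before deployment, then wrap every agent call so the metric is recorded automatically. The metric below (resolution without human escalation) and the toy escalation policy are illustrative choices, not Tarhan's actual setup.

```python
# Success criterion chosen in advance: share of tasks resolved without escalation.
metrics = {"calls": 0, "resolved_without_escalation": 0}

def instrumented(agent_fn):
    """Decorator that records every agent call against the success metric."""
    def wrapper(task):
        metrics["calls"] += 1
        result = agent_fn(task)
        if not result.get("escalated", False):
            metrics["resolved_without_escalation"] += 1
        return result
    return wrapper

@instrumented
def support_agent(task: str) -> dict:
    # Toy human-in-the-loop policy: anything mentioning "refund" goes to a person.
    return {"answer": f"Handled: {task}", "escalated": "refund" in task}

support_agent("reset my password")
support_agent("refund my order")
success_rate = metrics["resolved_without_escalation"] / metrics["calls"]
```

Because the metric is wired in from the first call, the "expensive demo" failure mode shows up in the numbers rather than in a post-mortem.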

5. Consider ‘AgentOps’ approaches

“AI agents do not succeed based on model ability alone,” said Martin Buffi, principal research director at Info-Tech Research Group. His team designed and developed AI agent systems for enterprise-level tasks including financial analysis, compliance verification, and document processing. What helped make these projects successful was the employment of “AgentOps” (agent operations), which focuses on managing the entire agent lifecycle.

6. Keep agents focused

Instead of creating a single agent to do everything, Buffi recommends “employing multiple specialized agents for tasks such as analysis, verification, routing, or communications.” Furthermore, Buffi’s team tried to mirror how human teams work in these agent teams, “through clear orchestration patterns: hub-and-spoke for parallel work, or sequential pipelines where intent and confidence had to be established before deeper action could take place.”
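The hub-and-spoke pattern described above can be sketched as a hub that fans a task out to specialized agents and merges their results. The three spoke functions here (analysis, verification, routing) are simplified stand-ins, not Info-Tech's actual system.

```python
# Specialized "spoke" agents, each with one narrow responsibility.
def analysis_agent(doc: str) -> dict:
    return {"word_count": len(doc.split())}

def verification_agent(doc: str) -> dict:
    return {"verified": "total" in doc.lower()}

def routing_agent(doc: str) -> dict:
    return {"route": "finance" if "invoice" in doc.lower() else "general"}

def hub(doc: str) -> dict:
    """Hub: dispatch the document to each spoke, then merge their outputs."""
    result = {}
    for spoke in (analysis_agent, verification_agent, routing_agent):
        result.update(spoke(doc))
    return result
```

In a sequential-pipeline variant, the hub would instead run the routing and verification spokes first and only proceed to deeper analysis once intent and confidence are established.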

7. Keep context in mind, and stay adaptable

Even for a relatively limited single-user agent, “context management is a significant hurdle and can lead to major problems if not handled correctly,” said Sean Falconer, head of AI at Confluent, reflecting on the personal agent he created. “As agents loop through tools and iterate on interactions, the context window fills rapidly. While older data points may lose relevance, models do not always prioritize the correct information.”

Also: Can you become an AI data trainer? How to prepare and what it’s worth

To maintain high-quality and consistent output, developers “spend a lot of time optimizing the way the system sorts, summarizes, and injects context so that the agent doesn’t lose track of the original intent,” Falconer explained. “Engineer for adaptability from day one. Make sure your AI investments are flexible and appropriately abstracted. Avoid vendor or model lock-in so you can change quickly when the next wave of innovation arrives.”
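The sort-summarize-inject approach to context management can be sketched as a bounded context builder: the original intent is always pinned first, the oldest turns are collapsed into a summary placeholder, and only the most recent turns are kept verbatim. The fixed item budget and the stub summarizer are simplifying assumptions, not Falconer's implementation.

```python
def build_context(intent: str, history: list[str], max_items: int = 4) -> list[str]:
    """Return a bounded context: intent pinned first, oldest turns summarized.

    A real system would summarize with a model and budget by tokens, not items.
    """
    if len(history) <= max_items:
        return [intent] + history
    old, recent = history[:-max_items], history[-max_items:]
    summary = f"[summary of {len(old)} earlier steps]"
    return [intent, summary] + recent
```

Because the intent is re-injected at the top on every turn, the agent cannot "lose track of the original intent" no matter how many tool-call iterations pile up behind it.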
