Detecting and Avoiding Rot in Your Agent AI – O’Reilly


The following article was originally published on Q McCallum’s blog and is being republished here with the permission of the author.

Generative AI agents and rogue traders pose similar insider threats to their employers.

In particular, we can expect companies to deploy agentic AI with broad reach and inadequate oversight. This creates the conditions for a particular flavor of a long-running problem, which in turn creates a new risk exposure for the companies concerned and anyone doing business with them. Bots and rogue traders are capable of causing major, sometimes existential, damage to the firms that employ them.

The main difference is scope: rogue traders work in investment banks, while agentic AI will be deployed across a wide range of companies and industry verticals. Agentic AI can therefore cause a greater number of problems than rogue traders and put larger amounts of capital at risk.

I’m naming this risk exposure ROT – rogue operator threat – and this document is a brief explanation of what it is and how to address it.

(I almost called it RAT with an A for “agent”, but then realized it would apply to any type of automated system. So I expanded the scope to “operator”.)

To set the stage, let’s take a trip to the trading floor:

Understanding the Rogue Trader

Rogue trader scams follow the same story:

  • A trader suffers losses on bad trades.
  • They hide those losses while making new trades in an attempt to recover.
  • The new trades also lose money, digging an even deeper hole.
  • Repeat.

This cycle continues until they are caught, at which point the bank suffers large losses (sometimes in the billions of dollars) and the trader faces legal consequences.

The story of Barings Bank provides a concrete example. Trader Nick Leeson had been logging fraudulent trades for three years in an attempt to cover his mounting losses. This only came to light when the Kobe earthquake moved markets against his positions and it became impossible to keep hiding the losses. Leeson’s £800M ($1.3B) hole pushed Barings into bankruptcy within days.

This may leave you asking: How could a professional trading operation let so many bad trades go unnoticed? How could a trader falsify records? Aren’t trading floors full of high-tech systems and electronic audit trails?

And the answer is: it’s complicated.

Yes, trading operations keep records. But no system is perfect. Every time a rogue trading scam comes to light, it turns out that there were lapses in risk controls. A sufficiently motivated trader – particularly one desperate to hide their mistakes – found those loopholes and took advantage of them, hiding their losing streak unless and until they could bring in real money to backfill the fake records.

However, that “unless” never happened, which is why their employers then faced financial, reputational, and sometimes legal troubles.

The ROT Threat of AI Agents

Like a trader, an AI agent acts on behalf of its employer and is given the space to work independently to complete its tasks.

The risk is that, in the rush to deploy agentic AI, companies will give bots more leeway than necessary. We have already seen cases in which bots were able to delete email and delete a production database. And there are no doubt other stories that haven’t made the news.

At least those issues were caught in real time. Companies exposed to ROT face a longer-term problem, in which a bot quietly incurs losses or causes other harm over an extended period. In those cases the problems will only be discovered by accident, or when it is too late.

For example, consider an agent that creates false data records to reflect (nonexistent) sales orders. This is likely to continue until some external event, such as investor due diligence or a budget review, forces someone to double-check those records against reality.
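That double-check amounts to reconciliation: comparing the records the agent produced against an independent source of truth. A minimal sketch of the idea, with hypothetical record shapes and a hypothetical payment-processor export standing in for “reality”:

```python
def reconcile(agent_records, payment_records):
    """Return the order IDs the agent logged that have no matching payment.

    Both arguments are lists of dicts with an "order_id" key; the payment
    records come from an independent system the agent cannot write to.
    """
    paid = {r["order_id"] for r in payment_records}
    return [r["order_id"] for r in agent_records if r["order_id"] not in paid]

# The agent logged three sales, but only two payments actually cleared.
agent_records = [{"order_id": "A1"}, {"order_id": "A2"}, {"order_id": "A3"}]
payment_records = [{"order_id": "A1"}, {"order_id": "A3"}]
phantom_orders = reconcile(agent_records, payment_records)  # ["A2"]
```

The key design point is independence: the check only works if the reference data lives in a system the agent has no write access to. Running it on a schedule, rather than waiting for due diligence, turns an accidental discovery into a routine control.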

Avoiding ROT: Minimizing the Danger

How can you reduce your downside risk exposure to ROT? Preventive measures are key: strong risk controls, a narrow scope of authority, and monitoring can catch rogue operator problems long before they become an existential threat.

In the wake of rogue trader scams, trading firms have tightened risk controls and separated duties to create a system of checks and balances. (This keeps traders from logging their own fake trades.) Firms also require traders to take time off, since fraudulent activity often surfaces when the culprit isn’t around every day to keep the scheme running.

By adapting these ideas to agentic AI, a company can monitor and limit the scope of a bot’s activity (e.g., placing more than 10 orders an hour requires human approval). It can also periodically purge the agent’s memory so it doesn’t accumulate too many evolved behaviors, or swap in an entirely new bot to pick up where the previous one left off. And, per my usual refrain to “never let bots run unattended,” a company can employ people to cross-check everything the bot does. Trust, but verify.
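The “more than 10 orders an hour” example can be sketched as a sliding-window guardrail that sits between the agent and the order system. Everything here is hypothetical – the class name, the threshold, and the escalation path are assumptions, not a reference implementation:

```python
from collections import deque
from time import monotonic


class OrderRateGuard:
    """Hypothetical guardrail: allow a bot's orders unattended up to a
    per-hour limit; past that, the caller should escalate to a human."""

    def __init__(self, max_per_hour=10):
        self.max_per_hour = max_per_hour
        self.timestamps = deque()  # monotonic times of recent allowed orders

    def allow(self, now=None):
        """Return True if the bot may place this order without approval."""
        now = monotonic() if now is None else now
        # Slide the window: drop order timestamps older than one hour.
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_hour:
            return False  # over the limit: route to a human approver instead
        self.timestamps.append(now)
        return True


guard = OrderRateGuard(max_per_hour=10)
# A burst of 12 orders in 12 seconds: the first 10 pass,
# the remaining 2 are held for human review.
results = [guard.allow(now=float(i)) for i in range(12)]
```

The point of the wrapper is that the limit lives outside the agent: the bot can’t talk itself out of the check, and a blocked call produces a human touchpoint rather than silent failure.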

This will not prevent the AI agent from making mistakes. But guardrails and sufficiently frequent checks should limit the scope of the bot’s damage. As with the rogue trader, the ROT problem is not about a single error; it’s about letting errors compound, undetected.
