Deloitte says businesses are deploying AI agents faster than security protocols



ZDNET Highlights

  • The adoption of AI agents among businesses is growing rapidly.
  • Meanwhile, development of security protocols is slow.
  • Deloitte recommends implementing inspection procedures.

According to Deloitte’s latest State of AI in the Enterprise report, businesses are expanding the use of AI agents faster than they can build adequate guardrails.

Published on Wednesday and based on a survey of more than 3,200 business leaders in 24 countries, the study found that 23% of companies are currently making “at least moderate” use of AI agents, but that figure is projected to reach 74% over the next two years. In contrast, the share of companies that report not using them at all, currently 25%, is expected to decline to just 5%.

Plus: 43% of employees say they’ve shared sensitive information with AI – including financial and customer data

However, the rise of agents in the workplace – AI tools trained to perform multistep tasks with less human supervision – is not being complemented with adequate guardrails. Only about 21% of respondents told Deloitte that their company currently has strong security and oversight mechanisms in place to prevent potential harm caused by agents.

“Given the rapid adoption trajectory of the technology, this may be a significant limitation,” Deloitte wrote in its report. “As agentic AI moves from pilots to production deployments, it will be essential to establish strong governance to capture value while managing risk.”

What could go wrong?

Companies such as OpenAI, Microsoft, Google, Amazon, and Salesforce have marketed agents as productivity-boosting tools, the main idea being that businesses can offload repetitive, low-risk workplace operations to them while human employees focus on more important tasks.

Also: Bad Feelings: How an AI Agent Coded Its Way to Disaster

However, greater autonomy brings greater risk. Unlike more limited chatbots, which require careful and constant prompting, agents can interact with a variety of digital tools – for example, signing documents or making purchases on behalf of organizations. This leaves more room for error, as agents can behave in unpredictable ways – sometimes with disastrous consequences – and can be vulnerable to prompt injection attacks.

Zoom out

The new Deloitte report is not the first to point out that AI adoption is eclipsing security.

A study published in May found that the majority of IT professionals surveyed (84%) said their employers were already using AI agents, while only 44% said they had policies in place to regulate the activity of those systems.

Also: How OpenAI is now protecting ChatGPT Atlas from attacks – and why security is not guaranteed

Another study, published in September by the nonprofit National Cyber Security Coalition, showed that while many people use AI tools like chatbots on a daily basis, including in the workplace, most do so without receiving any security training from their employers – for example, about the privacy risks that come with using chatbots.

And in December, Gallup published the results of a survey that showed the use of AI tools had increased among individual workers since last year, with nearly one-quarter (23%) of respondents saying they did not know whether their employers were using the technology at an organizational level.

Outcome

Of course, it would be unreasonable for business leaders to demand that there be absolutely bulletproof guardrails around AI agents at this early stage. Technology always evolves faster than our understanding of how it can go awry, and as a result, policy at every level lags behind deployment.

Also: How these state AI safety laws change the face of regulation in the US

This is especially true of AI: the cultural hype and economic pressure driving technology developers to release new models, and organizational leaders to adopt them, is arguably unprecedented.

But early studies like Deloitte's new State of AI in the Enterprise report point to a dangerous divide between deployment and security that could form as industries increase their use of agents and other powerful AI tools.

Also: 96% of IT professionals say AI agents are a security risk, but they’re deploying them anyway

For now, oversight should be the watchword: businesses should be aware of the risks associated with their internal use of agents, and have policies and procedures in place to ensure they don’t go off the rails – and, if they do, the resulting damage can be managed.

“Organizations need to establish clear boundaries for agents’ autonomy, defining which decisions agents can make independently versus which require human approval,” Deloitte recommends in its new report. “Real-time monitoring systems that track agent behavior and flag anomalies are essential, as are audit trails that capture the full range of agent actions to help ensure accountability and enable continuous improvement.”
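To make Deloitte's recommendation concrete, here is a minimal sketch in Python of what an autonomy boundary with an audit trail might look like. This is an illustrative example, not code from the report: the action names, risk categories, and the `dispatch` gate are all hypothetical.

```python
# Illustrative sketch of an agent autonomy boundary: some actions run
# autonomously, some require human approval, and everything is logged
# to an audit trail. Action names and policy sets are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy: which agent actions may run without approval
AUTONOMOUS_ACTIONS = {"summarize_document", "draft_email"}
APPROVAL_REQUIRED = {"sign_contract", "make_purchase"}

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, action: str, decision: str) -> None:
        # Capture every agent action and decision for accountability
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "decision": decision,
        })

def dispatch(action: str, trail: AuditTrail, human_approved: bool = False) -> str:
    """Gate an agent action against the autonomy boundary."""
    if action in AUTONOMOUS_ACTIONS:
        trail.record(action, "executed_autonomously")
        return "executed"
    if action in APPROVAL_REQUIRED:
        if human_approved:
            trail.record(action, "executed_with_approval")
            return "executed"
        trail.record(action, "blocked_pending_approval")
        return "pending_approval"
    # Actions outside the defined policy are anomalies: flag, don't run
    trail.record(action, "flagged_anomaly")
    return "flagged"

trail = AuditTrail()
print(dispatch("draft_email", trail))    # executed
print(dispatch("make_purchase", trail))  # pending_approval
print(dispatch("wire_transfer", trail))  # flagged
```

The key design choice, mirroring the report's language, is that unknown actions are flagged rather than executed, and every decision – including refusals – lands in the audit trail.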
