As companies race to integrate artificial intelligence into their operations, a vital safety net may be quietly disappearing out from under them.
Major insurance providers, including AIG and WR Berkley, are reportedly seeking regulatory permission to exclude AI liabilities from standard corporate policies. According to the Financial Times, the industry is moving to limit its exposure to what it regards as an “unpredictable and opaque” technology.
This shift represents a fundamental reassessment of risk that could have massive implications for how and whether enterprises deploy AI agents and automated tools.
I discussed this developing trend on Episode 183 of The Artificial Intelligence Show with Paul Roetzer, founder and CEO of SmarterX and the Marketing AI Institute, who has spent more than a decade working closely with the insurance industry.
Are AI hallucinations too risky to underwrite?
Insurers are in the business of calculating risk, but the risks of generative AI are proving difficult to measure.
A proposal from WR Berkley would reportedly bar claims related to any actual or alleged use of AI, including products a company sells that merely incorporate AI tools. Meanwhile, Chubb has agreed to cover certain AI risks but specifically excludes “widespread” incidents in which the failure of a single model affects multiple customers simultaneously, a scenario insurers fear could lead to systemic, aggregated losses.
These moves follow a number of high-profile, costly incidents.
For insurers, these “hallucinations” and errors fall into a gray area that makes them too risky to underwrite under current standard liability or cyber policies.
An overlooked risk
For Roetzer, who owned a marketing agency for 16 years that worked extensively with insurance carriers and agent networks, this development highlights a blind spot for many business leaders.
“I’ve spent a lot of time thinking about the insurance industry for over a decade,” says Roetzer. “To be honest, I hadn’t really stopped and thought deeply about the implications of AI on insurance policies. But now that I’ve looked at the topic, my mind is racing.”
If insurers will not protect firms from AI risks, internal demands for reliability will skyrocket. Companies may also shy away from adopting AI if they know that a single hallucination or agent mistake could result in uninsured, multi-million-dollar liabilities.
AI agents can complicate things
The timing of these exclusions is particularly notable as the industry moves toward agentic AI: systems that can take autonomous actions, execute code, and make decisions without human intervention.
“There are definitely risks, especially as we start getting more and more involved in the agentic side of it, that I think most businesses haven’t considered yet with respect to their insurance,” Roetzer says.
While a chatbot answering a question incorrectly is problematic, an autonomous agent executing a financial transaction or modifying code creates a liability that standard business insurance is not designed to cover.
What can you do?
This trend is still in its early stages, but is progressing rapidly.
If you’re a business leader, now is the time to review your contracts and talk to your risk management teams. The assumption that your general liability or errors and omissions (E&O) policy covers your new AI tool may no longer be true.
“If you’re in the insurance field or if you do contracting for your company, this is something that’s probably very close to you,” Roetzer says.
As AI technology continues to accelerate, companies must work harder to keep up and keep themselves safe.
“It’s still an early trend, but it definitely looks like it could have a big impact over time,” Roetzer says.