ElevenLabs insures agents, targets enterprises’ fears


As more enterprises want assurance that the AI tools they use will not fail or cause unwanted effects, AI voice vendor ElevenLabs has launched an insurance policy for AI agents.

The voice AI vendor, a 2022 startup, said on February 11 that it has achieved Artificial Intelligence Use Certification Level 1 (AIUC-1), a certification process that audits an AI system's ability to address risks such as data privacy, security, safety, reliability, accountability and social impact. The vendor said that, based on the results of the audit, insurers can now offer an AI-specific insurance policy that underwrites the actions of AI agents deployed by ElevenLabs customers. However, it is not clear which companies are willing to underwrite the policies.

ElevenLabs’ move to insure its agents is a response to enterprises’ concerns about the risks posed by AI technology and their desire to take more measured risks. Many AI vendors, including Adobe, Google, IBM and AWS, already offer indemnity, a legal concept whereby one party agrees to pay another for losses, damages or liabilities. An insurance policy for AI agents takes that protection a step further, as it promises coverage for wrongful actions the AI agent itself commits.


“It’s a reflection of where we are in the enterprise today,” said Futurum Group analyst David Nicholson. “We went from a fear of missing out to a fear of messing up.”

What the insurance means

He said enterprises now regularly consider whether implementing a given tool would cause harm, unlike a few years ago, when the emphasis was on implementing AI technology regardless of the potential for harm.

For ElevenLabs, which has seen its technology used maliciously in the past (for example, the vendor’s technology was used in a 2024 robocall incident in which a clone of President Joe Biden’s voice told New Hampshire residents not to vote in the Democratic primary), insuring its agents signals that it has taken steps to ensure its technology is reasonably secure, Nicholson said.

“They are confident that if it is used correctly according to all the warnings on the label, it will not do any bad things,” he said. “This is not a trivial matter.”

However, insurance for AI agents could also foster a false sense of confidence, because even when AI companies offer insurance, it is difficult to prove that damage was not caused by user error.

A better way

Nicholson said the indemnity guarantees that enterprises are protected even if a bad actor uses an AI tool in a way it shouldn't be used. However, it is not clear how effective indemnification policies are.


Furthermore, an uninsured AI agent is not high on the list of enterprises’ concerns, said Lian Jye Su, an analyst at Omdia, a division of Informa TechTarget. He said enterprises are more concerned about the performance and accuracy of AI agents.

“It’s really about the enterprise’s ability to deploy the agent with the right AI model and the right cloud infrastructure and then be able to feed the AI with the right context using internal data,” Su said. “You probably want to address all of these fundamental challenges before you commit to an AI agent with an insurance policy.”

However, an agent backed by an insurance policy may appeal to enterprises in highly regulated industries such as financial services. It is also a way for the text-to-speech vendor to differentiate itself in a market where it competes with multiple vendors, including OpenAI.
