The race to bring artificial intelligence to market raises the risk of a Hindenburg-style disaster that shatters global trust in the technology, a leading researcher has warned.
Michael Wooldridge, an AI professor at the University of Oxford, said the threat stems from immense commercial pressure on technology companies to release new AI tools, with companies desperate to win customers before fully understanding the products’ capabilities and potential flaws.
He said the rise of AI chatbots with easily circumvented guardrails shows how commercial incentives are being prioritized over more cautious development and safety testing.
“It’s the classic technology scenario,” he said. “You’ve got a technology that’s very promising, but hasn’t been tested as rigorously as you’d like, and the commercial pressure behind it is unbearable.”
Wooldridge, who will deliver the Royal Society’s Michael Faraday Prize Lecture on Wednesday evening, titled “This is not the AI we were promised”, said a Hindenburg moment was “very plausible” as companies rush to deploy more advanced AI tools.
The Hindenburg, a 245-meter airship that had crossed the Atlantic, was preparing to land in New Jersey in 1937 when it caught fire, killing 36 crew, passengers and ground staff. The blaze began when a spark ignited the 200,000 cubic meters of hydrogen that held the airship aloft.
Wooldridge said: “The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that moment on, and a similar moment is a real risk for AI.” Since AI is embedded in so many systems, a major incident could strike almost any sector.
The scenarios Wooldridge envisions include a deadly software update for self-driving cars, an AI-powered hack that shuts down global airlines, or a Barings Bank-style collapse of a major company, triggered by an AI doing something stupid. “These are very, very plausible scenarios,” he said. “There are many ways in which AI can go wrong in public.”
Despite his concerns, Wooldridge said he did not intend to attack modern AI. His starting point is the gap between what researchers expected and what has emerged. Many experts anticipated AI that would calculate solutions to problems and deliver answers that were solid and complete. “Contemporary AI is neither solid nor complete: it is very, very approximate,” he said.
This happens because the large language models that underpin today’s AI chatbots respond by predicting the next word, or part of a word, based on a probability distribution learned in training. The result is AI with an uneven profile of capabilities: incredibly effective at some tasks, yet terrible at others.
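That prediction loop can be sketched in a few lines: the model assigns a probability to every candidate next token, then one is sampled at random according to those weights. The toy vocabulary and probabilities below are invented for illustration and are not taken from any real model.

```python
import random

# Invented probabilities over a tiny vocabulary of possible next tokens,
# standing in for the distribution a real language model would compute.
next_token_probs = {
    "Paris": 0.72,  # the likeliest continuation of the prompt below
    "Lyon": 0.05,
    "a": 0.03,
    "the": 0.20,
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its assigned probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
prompt = "The capital of France is"
print(prompt, sample_next_token(next_token_probs, rng))
```

Because the output is sampled rather than looked up, the model usually continues with “Paris” but can occasionally emit something less apt, which is one source of the “very, very approximate” behavior Wooldridge describes.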
The problem, Wooldridge said, is that AI chatbots fail in unexpected ways and do not realize when they are wrong, yet they are designed to give confident answers regardless. Delivered in human-like, flattering prose, those answers can easily mislead people. The risk is that people start treating AI as if it were human. According to a 2025 survey by the Center for Democracy and Technology, about a third of students reported that they or a friend had had a romantic relationship with an AI.
“Companies want to approach AI in a very human-like way, but I think that’s a very dangerous path,” Wooldridge said. “We need to understand that these are just glorified spreadsheets, these are tools and nothing more.”
Wooldridge sees positives in the type of AI depicted in the early years of Star Trek. In a 1968 episode, The Day of the Dove, Mr. Spock interrogates the Enterprise’s computer, only to be told, in a distinctly non-human voice, that it holds insufficient data to answer. “That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he said. “Maybe we need an AI that talks to us in the voice of the Star Trek computer. You’d never believe it was a human.”
