Oxford researcher warns AI is heading towards Hindenburg-style disaster


Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images

Is the AI bubble about to burst? Will it set the economy on fire? If you believe a leading expert’s warning that the industry could be headed for a Hindenburg-style disaster, both analogies may be apt.

“The Hindenburg disaster destroyed global interest in airships; from that point on it was a dead technology, and a similar moment is a real risk for AI,” Michael Wooldridge, professor of AI at the University of Oxford, told The Guardian.

It may be hard to believe now, but before the German airship crashed in 1937, large-scale dirigibles represented the future of globe-spanning transportation, in an era when commercial airplanes, if you’ll allow it, hadn’t really taken off yet. And the Hindenburg, the world’s largest airship at the time, was the industry’s greatest achievement – as well as a propaganda vehicle for Nazi Germany.

At over 800 feet long, it was not far off the length of the Titanic – another giant whose name became synonymous with disaster – and regularly carried dozens of passengers on trans-Atlantic crossings. All those ambitions evaporated, however, when the ship suddenly caught fire while attempting to land in New Jersey. The massive fireball was attributed to a fatal flaw: the enormous volume of hydrogen that kept the craft aloft had been ignited by an unfortunate spark.

The inferno was filmed, photographed, and broadcast around the world in a media frenzy that sealed the fate of the airship industry. Could AI, with more than a trillion dollars of investment behind it, meet the same end? It’s not unimaginable.

“It’s the classic technology scenario,” Wooldridge told the newspaper. “You’ve got a technology that’s very promising, but hasn’t been tested as rigorously as you’d like, and the commercial pressure behind it is immense.”

Wooldridge suggests that AI could be responsible for some catastrophic spectacle, like a deadly software update for self-driving cars, or a bad AI-driven decision that collapses a major company. But his main concern is the glaring safety flaws that persist in AI chatbots despite their widespread deployment. In addition to having pitifully weak guardrails and being wildly unpredictable, AI chatbots are designed to be sycophantic, imbued with human-like personalities that keep users engaged.

Together, these traits can reinforce a user’s negative thoughts and lead them into mental health spirals filled with delusions and even a complete break from reality. These episodes of so-called AI psychosis have resulted in stalking, suicide, and murder. AI’s ticking time bomb is not a payload of combustible hydrogen, but millions of potentially psychosis-inducing conversations. OpenAI alone has admitted that more than half a million ChatGPT users every week have conversations that show signs of psychosis.

“Companies want to approach AI in a very human-like way, but I think that’s a very dangerous path,” Wooldridge told The Guardian. “We need to understand that these are just glorified spreadsheets, these are tools and nothing more.”

If AI has a place in our future, it needs to be as that same cold, impartial assistant – not a slick friend who pretends to have all the answers. According to Wooldridge, a shining example is the Enterprise’s computer in an early episode of “Star Trek,” which declares there is “insufficient data” to answer a question (and in a voice that is robotic, not alluring).

“That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he told the paper. “Maybe we need an AI to talk to us in the voice of a ‘Star Trek’ computer. You’d never believe it was a human.”

More on AI: It turns out that constantly telling workers that they will be replaced by AI has serious psychological effects.
