Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images
They may be responsible for creating the AI technology that many fear will eliminate jobs – if not the entire human race – but they are at least as anxious and sad as the rest of us about where this is all going.
At NeurIPS, one of the largest AI research conferences, held at the San Diego Convention Center this year, visions of AI destruction were clearly on the minds of many scientists in attendance. But are they seriously grappling with the risks of AI, or are they too busy fantasizing about scenarios they read in sci-fi novels? That's the question raised in a new piece by Alex Reisner in The Atlantic. Reisner found that many NeurIPS attendees talked extensively about the risks of AI, particularly the creation of a hypothetical artificial general intelligence, while ignoring the technology's mundane, already-present shortcomings.
“Few AI developers are thinking about the technology’s most concrete problems,” Reisner wrote, “while the public conversation about AI – including among the most prominent developers themselves – is dominated by hypotheticals.”
One researcher guilty of this? The University of Montreal’s Yoshua Bengio, one of the three so-called “godfathers” of AI whose work was foundational to the massive language models that fueled the industry’s relentless boom. Bengio has spent the past few years raising the alarm about AI safety, and recently launched a nonprofit called LawZero to encourage the safe development of the technology.
Bengio was concerned that, in a possible dystopian future, AI might betray its creators, and that “those who would have very powerful AI might misuse it for political gain, in terms of influencing public opinion,” Reisner recalled.
But the AI godfather “did not mention how fake videos are already influencing the public discussion,” Reisner said. “Neither did he meaningfully address the growing chatbot mental-health crisis, or the plundering of the arts and humanities. In his view, catastrophic harms are ‘three to 10 or 20 years’ away.”
Reisner wasn’t the only one to notice this disconnect. In a keynote address titled “Are we having the wrong nightmare about AI?”, sociologist Zeynep Tufekci warned that researchers are missing the forest for the trees by focusing so heavily on the risks posed by AGI – a technology we don’t even know will ever be possible to create, and for which there is no agreed-upon definition. When someone in the audience countered that researchers are already well aware of immediate risks such as chatbot addiction, Tufekci responded, “I don’t really see these discussions. I keep seeing people discussing mass unemployment versus human extinction.”
Both of which are distant hypotheticals at best. The discourse around AI safety is often dominated by apocalyptic rhetoric, much of it promoted by the very billionaires who make the stuff. OpenAI CEO Sam Altman has predicted that AI will eliminate entire categories of jobs, warned of a coming crisis of widespread identity fraud, and is reportedly preparing for a doomsday in which AI systems retaliate against the human race, potentially by spreading a deadly virus.
And Bengio isn’t the only AI “godfather” with regrets. British computer scientist Geoffrey Hinton – who received the 2018 Turing Award along with Bengio and former Meta chief AI scientist Yann LeCun – has cast himself as an Oppenheimer-like figure in the field. In 2023, he famously said he regretted his life’s work after leaving his role at Google, and he recently sat down for a discussion with Senator Bernie Sanders, speaking at length on the myriad risks of the technology, including the destruction of jobs and militarized AI systems advancing empire.
Reisner made an ironic observation: the name NeurIPS, short for “Neural Information Processing Systems,” is a holdover from a time when scientists vastly underestimated the complexity of the brain’s neurons, comparing them to the processing done by computers.
“Regardless, a central feature of the culture of AI is an obsession with the idea that computers have a mind,” he wrote, noting that Anthropic and OpenAI have published reports describing chatbots as “disloyal” and “dishonest,” respectively. In AI discourse, science fiction often trumps science.
More on AI: Anthropic’s chief scientist says we are rapidly approaching a moment that could ruin us all