A leading artificial intelligence expert has pushed back his timeline for AI-driven destruction, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus accelerate their own evolution towards superintelligence.
Former OpenAI employee Daniel Kokotajlo sparked an energetic debate in April by releasing AI 2027, a scenario that imagines uncontrolled AI development leading to the creation of superintelligence, which – after tricking world leaders – destroys humanity.
The scenario quickly won both fans and detractors. US Vice President JD Vance referenced AI 2027 while discussing America’s artificial intelligence arms race with China in an interview last May. Gary Marcus, an emeritus professor of neuroscience at New York University, called the piece a “work of fiction” and its various conclusions “pure science fiction mumbo jumbo”.
Timelines for transformative artificial intelligence – sometimes called AGI (artificial general intelligence), or AI capable of replacing humans in most cognitive tasks – have become a fixture in communities dedicated to AI safety. The release of ChatGPT in 2022 significantly accelerated these timelines, with officials and experts predicting the arrival of AGI within decades or even years.
Kokotajlo and his team named 2027 as the year when AI would achieve “fully autonomous coding”, though they said this was only their “most likely” estimate and that some of them had longer timelines. Now, doubts are emerging about the imminence of AGI, and about whether the term is meaningful in the first place.
“Many others have been pushing back their timelines over the past year as they realize how volatile AI performance is,” said Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report.
“For a scenario like AI 2027 to happen, [AI] will need a lot of practical skills that are useful in real-world complexities. I think people are starting to realize the enormous inertia in the real world that will delay full societal transformation.”
“The term AGI only really made sense when AI systems were very narrow – like chess-playing and Go-playing systems,” said Henry Papadatos, executive director of the French AI nonprofit SaferAI. “Now we have systems that are already quite general, and the term doesn’t carry as much meaning.”
Kokotajlo’s AI 2027 relies on the idea that AI agents will completely automate coding and AI R&D by 2027, leading to an “intelligence explosion” in which AI agents create smarter and smarter versions of themselves, and then – in a possible endgame – kill off all humans by the mid-2030s to make room for more solar panels and datacenters.
However, in their update, Kokotajlo and his co-authors revised their expectations for when AI might be able to code autonomously, saying it would likely happen in the early 2030s rather than 2027. The updated forecast sets 2034 as the new horizon for “superintelligence” and gives no estimate of when AI might destroy humanity.
“It appears that things are moving a little slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and they are a little longer now too,” Kokotajlo wrote in a post on X.
Creating AI capable of conducting AI research remains the goal of leading AI companies. Sam Altman, CEO of OpenAI, said in October that it was his company’s “internal goal” to have an automated AI researcher by March 2028, but added: “We may completely fail in this goal.”
Andrea Castagna, a Brussels-based AI policy researcher, said there were many complexities that dramatic AGI timelines do not address. “The fact that you have a superintelligent computer focused on military activity does not mean that you can integrate it into the strategic documents we have been compiling for the last 20 years.
“The more we develop AI, the more we see that the world is not science fiction. The world is much more complex than that.”