Anthropic’s chief scientist says we are rapidly approaching a moment that could ruin us all

Illustration by Tag Hartman-Simkins/Futurism. Source: Getty Images

Anthropic’s Chief Scientist Jared Kaplan is making some dire predictions about humanity’s future with AI.

Broadly speaking, the choice is ours. According to Kaplan, for the moment our destiny remains mostly in our hands, unless we decide to hand the proverbial baton to the machines.

That point is rapidly approaching, he says in a new interview with The Guardian. Kaplan estimates that by 2030, or even as early as 2027, humanity will have to decide whether to take the "ultimate risk" of letting AI models train themselves. The coming "intelligence explosion" could take the technology to new heights, giving rise to a so-called artificial general intelligence (AGI) equal to or beyond human intelligence that benefits mankind with all kinds of scientific and medical advances. Or it could allow the power of AI to exceed our control, leaving us at the mercy of its whims.

“It seems like kind of a scary process,” he told the newspaper. “You don’t know where you’re going to end up.”

Kaplan is one of several prominent figures to warn about potentially disastrous consequences in the AI field. Geoffrey Hinton, one of the three so-called godfathers of AI, famously declared that he regretted his life's work, and has often warned about how AI could distort or even destroy society. OpenAI CEO Sam Altman predicts that AI will eliminate entire categories of labor. Kaplan's boss, Anthropic CEO Dario Amodei, recently cautioned that AI could take over half of all entry-level white-collar jobs, and accused his competitors of "sugarcoating" how badly AI will disrupt society.

Kaplan seems to agree with his boss's assessment of jobs: AI will be able to do "most white-collar work" in two to three years, he said in the interview. And while he is optimistic that we will be able to keep AI aligned with human interests, he is also concerned about the possibility of allowing powerful AI to train other AI, an "extremely high-risk decision" that we will have to make in the near future.

"That's the thing that we probably look at as the biggest decision or the scariest thing… Once there's no one involved in the process, you really don't know," he told The Guardian. "One is, do you lose control over it? Do you even know what the AI is doing?"

To an extent, larger AI models are already used to train smaller AI models in a process called distillation, which allows the smaller AI to essentially catch up with its larger teacher. However, Kaplan is concerned about what he calls iterative self-improvement, in which AIs learn without human intervention and make substantial leaps in their abilities.

Whether we let that happen depends on some heavy philosophical questions about the technology.

"The main question there is: Are AIs good for humanity?" Kaplan said. "Will they be helpful? Will they be harmless? Do they understand people? Will they allow people to continue to have agency over their lives and the world around them?"

While the dangers of AI are real, Kaplan's warnings warrant some careful unpacking. For one, they rest on the premise that AI is already one of the most consequential and important technologies to date, even though existing AI systems hardly represent the powerful autonomous machines that many cautionary sci-fi stories have warned about, or even a meaningful step toward getting there. The saying goes that there is no such thing as bad publicity, and you could add that condemnation, especially in the AI industry, is its own form of publicity. Visions of apocalypse distract attention from the technology's more mundane consequences, such as AI's staggering environmental damage, its flouting of copyright law, and its addictive, hallucination-inducing cognitive effects.

Furthermore, many AI experts, including some of the field's seminal figures like Yann LeCun, do not believe that the LLM architecture underpinning AI chatbots is capable of evolving into the all-powerful, intelligent systems that figures like Kaplan are so concerned about. It's also not clear whether AI is actually increasing productivity in the workplace; some research suggests the opposite, including several notable cases of bosses replacing their employees with AI agents, only to re-hire them when the tools failed.

Kaplan acknowledged that it is possible AI's capabilities may stagnate. "Maybe the best AI ever is the AI we have right now," he said. "But we don't really think so. We think it will get better."

More on AI: Google CEO says we’ll all have to suffer as AI takes society through the woodchipper
