The boss of AI startup Anthropic has said humanity is entering a phase of artificial intelligence development that will “test who we are as a species”, arguing that the world needs to “wake up” to the risks.
Dario Amodei, the company’s co-founder and chief executive, whose firm makes the hit chatbot Claude, expressed his fears in a 19,000-word essay titled “The Adolescence of Technology”.
Describing the advent of extremely powerful AI systems as potentially imminent, he wrote: “I believe we are entering a rite of passage, both unsettling and inevitable, one that will test who we are as a species.”
Amodei said: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems have the maturity to wield it.”
The tech entrepreneur, whose company is reportedly worth $350bn (£255bn), said his essay was an attempt to “wake people up” to the need for action on AI safety.
Amodei published the text as the UK government announced that Anthropic would help create chatbots offering careers advice and helping jobseekers find work, as part of a wider effort to develop AI assistants for public services. Last week, the company published an 80-page “constitution” for Claude in which it outlined how it wants to make its AI “broadly safe, broadly ethical”.
Amodei co-founded Anthropic in 2021 with other former staff of OpenAI, the developer of ChatGPT. A leading voice on AI safety who has consistently warned about the dangers of uncontrolled AI development, he wrote that the world was “much closer to real danger” in 2026 than in 2023, when the debate over the existential threat from AI rose up the political agenda.
He pointed to the controversy over erotic deepfakes created by Elon Musk’s Grok AI that flooded social media platform X over Christmas and New Year, including warnings that the chatbot was creating child sexual abuse material.
Amodei wrote: “Some AI companies have shown a disturbing negligence toward child sexual abuse in today’s models, which makes me skeptical that they will show the inclination or ability to address autonomy risks in future models.”
The Anthropic chief executive said powerful AI systems capable of acting autonomously could arrive within one to two years.
He defined “powerful AI” as a model smarter than a Nobel prize winner in fields such as biology, mathematics, engineering and writing, one that can give instructions to humans or take them, and that, although it “lives” on a computer screen, can control robots and even design them for its own use.
Acknowledging that powerful AI could arrive well beyond the two-year timeframe, Amodei said the recent rapid progress made by the technology should be taken seriously.
He wrote, “If the exponential continues—which is not certain, but there is now a decade-long track record supporting it—then it may not take more than a few years for AI to be better than humans at essentially everything.”
Last year, Amodei warned that AI could halve the number of entry-level white-collar jobs and push overall unemployment as high as 20% within five years.
In his essay, Amodei cautioned that the economic rewards from AI, such as the productivity gains from eliminating jobs, could be so large that no one applies the brakes.
“That’s the trap: AI is so powerful, such a wonderful prize, that it’s very difficult for human civilization to put any restrictions on it,” he said.
However, Amodei said he was optimistic about a positive outcome. “I believe that if we act decisively and carefully, the risks can be addressed – I would go so far as to say that our chances are good. And on the other side of this is a vastly better world. But we need to understand that this is a serious civilisational challenge.”