‘Humanity needs to wake up’ to the dangers of AI, says Anthropic chief


Humanity “needs to wake up” to the potentially catastrophic risks posed by powerful AI systems in the coming years, according to Anthropic boss Dario Amodei, whose company is among those pushing the boundaries of the technology.

In an essay of nearly 20,000 words posted on Monday, Amodei described the risks that could emerge from the technology’s uncontrolled growth, ranging from massive job losses to bioterrorism.

Amodei wrote, “Humanity is about to be entrusted with almost unimaginable power and it is deeply unclear whether our social, political and technological systems have the maturity to wield it.”

In the essay, one of the AI industry’s most powerful entrepreneurs issued a stern warning that safeguards around AI are inadequate.

Amodei has outlined the risks that could arise from the advent of “powerful AI” – systems that will be “much more capable than any Nobel laureate, politician or technologist” – which he estimates is likely to occur in the next “few years”.

Among those risks is the potential for individuals to develop biological weapons capable of killing millions of people or “in a worst-case scenario even destroying all life on Earth”.

Amodei wrote that “a troubled loner” who might carry out a school shooting, but probably could not build a nuclear weapon or spread the plague, “will now be elevated to the competence level of a PhD virologist”.

He also raises the potential for AI to “go rogue and dominate humanity” or empower authoritarians and other bad actors, leading to “global totalitarian dictatorships.”

Amodei, whose company Anthropic is the main rival to ChatGPT maker OpenAI, has clashed with President Donald Trump’s AI and crypto ‘tsar’ David Sacks over the direction of US regulation.

He also compared the administration’s plan to sell advanced AI chips to China to selling nuclear weapons to North Korea.

Trump signed an executive order last month to hamper state-level efforts to regulate AI companies, and last year published an AI action plan aimed at accelerating American innovation.

In the essay, Amodei warned of widespread job losses and a “concentration of economic power and wealth” in Silicon Valley as a result of AI.

“That’s the trap: AI is so powerful, such a wonderful prize, that it’s very difficult for human civilization to put any restrictions on it,” he said.

In an indirect reference to the controversy surrounding Elon Musk’s Grok AI, Amodei wrote that “some AI companies have shown a disturbing negligence toward the sexual exploitation of children in today’s models, which makes me doubt that they will show the inclination or ability to address the risks of autonomy in future models”.

AI safety concerns such as bioweapons, autonomous weapons, and malicious state actors featured prominently in public discussion in 2023, partly driven by warnings from leaders like Amodei.

That year, the UK government held an AI Safety Summit at Bletchley Park, where countries and laboratories agreed to work together to counter such risks. The next summit is scheduled to be held in India in February.

But political decisions around AI are being driven by a desire to seize the opportunities the technology presents rather than to minimise its risks, according to Amodei.

“This hesitation is unfortunate, because technology doesn’t care about what’s fashionable, and we are much closer to real danger in 2026 than in 2023,” he wrote.

Amodei was an early employee of OpenAI, but left in 2020 to co-found Anthropic after clashing with Sam Altman over OpenAI’s direction and AI guardrails.

Anthropic is in talks with groups including Microsoft and Nvidia, and with investors including Singapore’s sovereign wealth fund GIC, Coatue and Sequoia Capital, about a funding round of $25 billion or more that would value the company at $350 billion.
