AI technology leaders have a lot to gain by striking fear in the hearts of their investors. By portraying technology as an over-powering force that could easily bring humanity to its knees, the industry is hoping to sell itself as a panacea: a remedy for a situation it had a firm hand in bringing about.
In this case, Dario Amodei, co-founder and CEO of Anthropic, is back with a 19,000-word essay posted on his blog, in which he argued that “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems have the maturity to wield it.”
In light of that existential threat, Amodei attempted to create a framework to “defeat” the risks posed by AI—which, by his own admission, could be “futile.”
He wrote, “Humanity needs to wake up, and this essay is an attempt – possibly futile, but worth trying – to wake people up.”
Amodei argued that “we are much closer to the real danger in 2026 than in 2023,” citing the risk of large job losses and “concentration of economic power” and wealth.
However, there is little incentive to invest in meaningful guardrails. In his essay, Amodei indirectly took a dig at Elon Musk’s Grok chatbot, which has been embroiled in a major controversy for creating non-consensual sexual images.
“Some AI companies have shown a disturbing negligence toward child sexual abuse in today’s models, which makes me skeptical that they will show the inclination or ability to address the risks to autonomy in future models,” he wrote.
The CEO also cited the risk of AI developing dangerous biological weapons or “improved” military weapons. An AI could “turn evil and dominate humanity” or allow countries to “use their advantages in AI to gain power over other countries”, leading to “the dangerous possibility of a global totalitarian dictatorship”.
Amodei lamented that we are unwilling to tackle these risks head-on, at least not yet. In its current race to the bottom, he argued, the AI industry finds itself in a “trap”.
He wrote, “AI is so powerful, such a wonderful prize, that it is very difficult for human civilization to impose any restrictions on it.”
He wrote, “Taking time to carefully build AI systems so that they do not autonomously endanger humanity is in real tension with the need for democratic countries to stay ahead of authoritarian countries and not be subservient to them. But in turn, the same AI-enabled tools that are essential to fighting tyranny can, if taken too far, be used to create tyranny in our own countries.”
He argued, “AI-powered terrorism could kill millions through the misuse of biology, but overreacting to this risk could lead us toward an autocratic surveillance state.”
As part of a solution, Amodei renewed his calls to deprive other countries of the resources to create powerful AI. He even compared the US selling Nvidia AI chips to China to “selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing and so the US is ‘winning’.”
Many questions remain about the actual risks of advanced AI, a topic heavily debated among realists, skeptics, and boosters of the technology. Critics have pointed out that leaders like Amodei often overstate existential risks, especially as the pace of technological improvement seems to be slowing.
We must also consider the broader context of Amodei’s eloquent warning. The CEO’s company is closing a multibillion-dollar funding round at a valuation of $350 billion. In other words, Amodei has a huge financial interest in positioning himself as the solution to the risks cited in his essay.
More on Anthropic: Anthropic let an AI agent run a small shop and the result was unintentionally hilarious
