Anthropic abandons its huge safety pledge that was supposedly the company’s entire purpose

In 2021, a group of former OpenAI employees founded a new startup, Anthropic, to build AI models with a renewed focus on safety after concluding their employer had gone astray. OpenAI was originally founded on nonprofit principles and a commitment to transparency, but it then took a billion-dollar investment from Microsoft and made its technology closed-source, prompting an exodus.

Now, Anthropic may be following the same path as its rival. On Tuesday, it revealed a new version of its responsible scaling policy that for the first time abandons the core safety commitment it made in 2023: to stop training and refuse to deploy an AI system if it cannot guarantee the system has appropriate safety guardrails that meet stringent internal standards.

The new feeling among the company’s leadership is that the commitment has become an unnecessary ball and chain.

“We felt that stopping training AI models wouldn’t really help anyone,” Jared Kaplan, Anthropic’s chief science officer, told Time in an interview. “With the rapid advancement of AI, we didn’t really feel there was any point for us to make unilateral commitments… if competitors were moving forward.”

The updated policy, it is fair to say, clearly contradicts the organization’s raison d’être. In an industry riddled with outrageous hype and a lax attitude toward ethics, Anthropic has presented itself as the adult in the room. There’s no better example of its carefully crafted safety-focused image than CEO Dario Amodei’s much-mythologized account of how, in the summer of 2022, he chose to hold back a powerful AI model he knew would change the world because he was so concerned about its risks; months later, OpenAI released ChatGPT and stole all the headlines.

So why such a big reversal? Anthropic gives several reasons in its announcement. One of them is the “anti-regulatory political environment.” Amodei has long pushed for stronger AI regulations, an ambition that more or less faded after the Trump administration took office. In particular, he criticized Trump’s efforts to place blanket restrictions on states’ ability to pass their own AI regulations, which would leave AI companies beholden only to very weak federal laws. That stance has earned Amodei continued attacks from administration figures, who have accused him of fearmongering.

And so, with no strong legal framework in place, there is nothing to force Anthropic’s rivals to play by the same rules it supposedly follows. Anthropic argues that this means any safety research and measures it undertakes become outdated by default as the rest of the industry keeps building ever more powerful models.

“If one AI developer halted development to implement safeguards while others moved forward with training and deploying AI systems without strong mitigations, the result could be a world that is less safe,” its new policy argues. “The developers with the weakest safeguards will set the pace, and responsible developers will lose their ability to conduct safety research.”

Perhaps there is some truth to that argument. But it’s a spurious justification for Anthropic to remove a central pillar of its safety credentials. Anthropic can’t really control what its competitors do, but is that a reason to stop even pretending to lead by example? After all, the regulatory environment may change. And the demands for AI safety will not go away; arguably, they will only grow as the industry’s contradictory promises become more apparent and the risks of the technology become even more consequential.

The timing of the policy change also cannot be ignored. Notably, Anthropic has a $200 million contract with the Pentagon, signed last summer, to deploy Claude in the military. But that vital funding tap is now in jeopardy, as Trump officials have reportedly threatened to cut Anthropic off over the company’s insistence that its technology not be used for mass surveillance or autonomous weaponry. Defense Secretary Pete Hegseth met with Amodei on Tuesday and gave the CEO an ultimatum, Axios reported: loosen Anthropic’s AI safeguards to accommodate the military, or the Pentagon would either drop the company and declare it a “supply chain risk,” or invoke the Defense Production Act to force Anthropic to share its AI technology.

But if the new policy is a capitulation by Anthropic, chief science officer and co-founder Kaplan doesn’t see it that way.

“I don’t think we’re making any kind of U-turn,” Kaplan told Time.

More on Anthropic: Anthropic got angry at DeepSeek for copying its AI without permission, which is very ironic when you consider how it created Claude in the first place
