Pentagon issues threat to Anthropic


Michael M. Santiago/Getty Images

Over the weekend, the Wall Street Journal reported that the US military used Anthropic's Claude AI chatbot during its invasion of Venezuela and kidnapping of the country's president, Nicolás Maduro.

The exact details of Claude's use are unclear, but the incident demonstrates the Pentagon's appetite for AI, and how tools already available to the public can be incorporated into military operations. And when Anthropic found out about it, its reaction was frosty.

An Anthropic spokesperson declined to tell the WSJ whether "Claude, or any other AI model, was used for any specific operation, classified or otherwise," but noted that "any use of Claude – whether in the private sector or at the government level – is required to comply with our usage policies, which govern how Claude can be deployed."

The deployment reportedly happened through the AI company's partnership with secretive military contractor Palantir. Anthropic also signed a contract worth up to $200 million with the Pentagon last summer as part of the military's broader adoption of the technology, alongside OpenAI's ChatGPT, Google's Gemini, and xAI's Grok.

Whether the Pentagon's use of Claude broke any of Anthropic's rules is unclear. Claude's usage guidelines prohibit it from being used to "facilitate or promote any act of violence," "develop or design weapons," or conduct "surveillance."

Either way, Trump administration officials are now considering severing ties with Anthropic over the company's insistence that mass surveillance of Americans and fully autonomous weapons remain off limits, Axios reports.

"Everything is on the table," including dialing back the partnership, a senior administration official told Axios. "But there has to be an orderly replacement for them, if we think that's the right answer."

As if that phrasing wasn't strong enough, a senior official – it's unclear whether it was the same one – followed up with a nastier comment.

"This will be very difficult to resolve, and we will make sure they pay a price for forcing our hand like this," they told the site.

Anthropic reportedly reached out to Palantir to ask whether Claude was used during the attacks on Venezuela, according to Axios' sources – a sign of a broader cultural conflict and growing concern about the technology's involvement in military operations.

Late last month, Anthropic was already clashing with the Pentagon over limits on its $200 million contract, including how law enforcement agencies like Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, per earlier reporting by the WSJ.

Anthropic CEO Dario Amodei has repeatedly warned that the inherent risks of the technology his company is working on demand more government oversight, regulation, and guardrails. He has also raised concerns over the use of AI in autonomous lethal missions and domestic surveillance.

In a lengthy essay posted earlier this year, Amodei argued that large-scale AI-facilitated surveillance should be considered a crime against humanity.

Meanwhile, Defense Secretary Pete Hegseth doesn't share those concerns. Earlier this year, he promised the Pentagon "will not employ AI models that won't allow you to fight a war" – a comment sources told the WSJ was related to the discussions with Anthropic.

In statements to both the WSJ and Axios, Anthropic continued to emphasize that it is "committed to using frontier AI in support of US national security."

However, its standoff with the Pentagon appears to be going over well with its non-government users, who are wary of the technology being involved in military operations.

"Good job Anthropic, you have become a top closed [AI] company in my books," reads one top post on the Claude subreddit.

More on AI: AI confusion is leading to domestic abuse, harassment and stalking
