Anthropic says it can’t ‘in good conscience’ allow Pentagon to remove AI safety checks for military use

Anthropic said on Thursday it could not “in good conscience” comply with the Pentagon’s demand to remove safety precautions from its artificial intelligence models and give the US military unfettered access to its AI capabilities.

The Defense Department had threatened to cancel the $200 million contract and deem Anthropic a “supply chain risk,” a designation with serious financial implications, if the company did not comply with the request by Friday.

Chief Executive Dario Amodei said in a statement that threats from Defense Secretary Pete Hegseth would not change the company’s position and that he hoped Hegseth would “reconsider”.

“Our strong priority is to continue serving the department and our warfighters — with our two requested safeguards in place,” he said. “We stand ready to continue our work to support the national security of the United States.”

At the heart of the standoff between the Department of Defense and Anthropic is a disagreement over how the AI company’s product, Claude, may be used. The Pentagon has demanded that Anthropic remove safety guardrails and allow all lawful uses of Claude, while Anthropic has pushed back against allowing Claude to be used for large-scale domestic surveillance or in autonomous weapons systems that can kill people without human input.

After months of controversy and government pressure, Hegseth reportedly gave Amodei until Friday evening to agree to the Pentagon’s demands or face punitive action.

Whether Anthropic would accept was seen as a high-profile test of its claim to be the most safety-conscious of the major AI companies, and of whether any part of the AI industry would stand up against government efforts to use the technology for controversial, potentially lethal purposes.

In his statement, Amodei said that using AI for autonomous weapons and large-scale domestic surveillance “is beyond the bounds of what today’s technology can do safely and reliably”.

The Defense Department has handed out several lucrative deals in recent years to tech companies to build or integrate AI technology into US military systems. In July last year, Anthropic was one of several big tech companies, including Google and OpenAI, to receive contracts worth up to $200m with the DoD. What set Anthropic apart, and intensified its conflict with the Pentagon, is that until this week its model was the only one approved for use in the military’s classified systems. (Elon Musk’s xAI reached an agreement this week under which its model will also be used in classified systems.)

Anthropic’s technology has reportedly already been used for military applications, including the US capture of Venezuelan leader Nicolás Maduro last month, highlighting the growing use of AI in conflict. The growth of autonomous weapons technology, such as drones that can operate even when disconnected from a human operator, has also heightened long-standing concerns about how AI will be used in life-and-death situations.

Anthropic and Amodei have long been among the industry’s most prominent advocates for regulation and safety precautions in AI development, maintaining a basic policy of not releasing new AI models without first ensuring their safety. Amodei’s calls for regulation, and his history of political opposition to Donald Trump, have put him at odds with Hegseth’s pledge to remove “wokeness” from the armed forces and pursue aggressive military policies.

If Hegseth follows through on his threat to classify Anthropic as a supply chain risk, it would be a major blow to the AI company. The designation, which is commonly used against foreign adversaries, would prevent other vendors doing business with the U.S. military from using Anthropic’s products.
