Anthropic’s ongoing battle with the Pentagon over the military’s use of its AI systems flared up this week around a hypothetical nuclear attack scenario, according to new reporting from The Washington Post.
The maker of the Claude AI models has frustrated the Pentagon by objecting to the use of its systems for autonomous weapons and mass surveillance of US citizens. To settle the debate, a defense official told WaPo, the Pentagon’s technology chief floated a highly hypothetical proposal: Would Anthropic let the Army use Claude to help shoot down nuclear-armed intercontinental ballistic missiles?
Anthropic CEO Dario Amodei’s response clearly irked Pentagon leaders. “You can call us and we’ll work on it,” is how the defense source described it to WaPo.
An Anthropic spokesperson denied that Amodei had made that response and called the account “patently false.” The spokesperson said the company has agreed to allow Claude to be used for missile defense.
Whatever the case, it is clear that the parties are talking past each other. The standoff has escalated over the Pentagon’s demand that Anthropic loosen its safety restrictions around Claude, a demand that is making the company uneasy.
For months, people inside and outside the DoD and the Trump administration have put pressure on Anthropic, a safety-focused company founded by former OpenAI employees. Amodei has criticized the administration’s efforts to curb AI regulation, including a proposed ban on all state-level AI rules. Trump officials like AI czar David Sacks have fired back, calling Amodei “woke” and accusing him of “fear-mongering.”
Tension has increased in recent weeks. During a tense meeting with Defense Secretary Pete Hegseth on Tuesday, Amodei was reportedly presented with a series of ultimatums. If Anthropic did not allow the Army unrestricted use of its AI, the Pentagon could declare the company a supply chain risk and cut Anthropic from all current and future contracts, including an outstanding $200 million deal to deploy Claude across the Army signed last summer. The Pentagon also threatened to use the Defense Production Act to force Anthropic to hand over its AI technology, a move under the Cold War-era law that would be legally dubious and almost certainly challenged.
In a statement Thursday, Amodei said that despite Hegseth’s threats, Anthropic could not agree to the Pentagon’s “final” proposal for unrestricted use of Claude. The refusal angered defense officials. In the exchange, Under Secretary of Defense for Research and Engineering Emil Michael accused Amodei of having a “God-complex,” adding that Amodei “wants nothing more than to try to personally control the U.S. military and is OK with endangering the security of our country.”
Pentagon spokesman Sean Parnell emphasized on X that the Pentagon has “no interest in using AI to conduct mass surveillance of Americans” or in using AI “to develop autonomous weapons that operate without human involvement.” Instead, Parnell claimed, the Pentagon is only seeking to use Anthropic’s AI for “all lawful purposes.”
“We will not let any company dictate the terms of how we make operational decisions,” Parnell said. “They have until 5:01 pm ET on Friday to make a decision. Otherwise, we will end our partnership with Anthropic and consider them a supply chain risk.”
It is not clear what the next step will be for either side. But Anthropic no longer stands alone in its fight. Axios reported that Sam Altman, CEO of rival OpenAI, wrote in a memo to employees that he would draw the same line in the sand as Anthropic if the military sought to use OpenAI’s products the same way.
“This is no longer just an issue between Anthropic and [the Pentagon]; it is an issue for the entire industry and it is important to make our stance clear,” Altman wrote. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-risk automated decisions.”
Anthropic may also be getting reinforcements from elsewhere in Silicon Valley. Two coalitions of workers that include employees of Google, Microsoft, Amazon, and OpenAI have demanded their employers join Anthropic in refusing to allow the military unrestricted use of their AI systems, Bloomberg reported.
The nuclear scenario floated by the Pentagon during talks with Anthropic, although highly hypothetical, underscores how deeply the military intends to deploy AI technology. The US, along with other major powers such as France and China, has agreed on the need to keep a human involved in all decisions regarding the use of nuclear weapons. But Paul Dean, vice president of the global nuclear program at the nonprofit Nuclear Threat Initiative, warned in comments to WaPo that an AI could still influence a human’s decision to press the big red button. In recent war games, major AI models, including Claude, Gemini, and ChatGPT, opted against deploying nuclear weapons in most scenarios.
“It’s not just about making sure there’s a human being in the decision-making cycle,” Dean told WaPo. “The question is, to what extent will AI influence human decision making?”
More on AI: Anthropic abandons the sweeping safety pledge that was allegedly the company’s entire purpose
