As Anthropic draws a line in the sand over the Pentagon’s refusal to relax some of its security guardrails, AI vendors and enterprises are watching the situation closely, sparking a broader discussion about who has the authority to define the safe use of AI.
In a statement dated February 27, the deadline to respond to the Pentagon's demand that it relax its AI policy, Anthropic CEO Dario Amodei said he "cannot in good conscience accede to their request."
While he believes AI can help protect America, Amodei said that right now, "we believe that AI can undermine, rather than defend, democratic values," particularly in the form of mass domestic surveillance and powering fully autonomous weapons.
However, earlier in the week, Anthropic released an updated "responsible scaling policy," a move that will allow it to focus more on transparency and less on ensuring that the models it releases are not harmful to society. The tension between Anthropic's decision to relax its RSP while refusing to bow to the government shows the pressure AI vendors are under to remain competitive while ensuring their models are safe. It also signals to enterprises that although Anthropic is changing some of its security rules, it can still hold a firm stance.
Holding the line
Anthropic isn't the only vendor feeling the pressure. OpenAI and Google employees are also pushing their employers to follow Anthropic's lead, even circulating a petition urging them to take a similar stance.
"Frontier AI companies are no longer neutral infrastructure providers; they are strategic actors whose models have dual-use military relevance," said Kashyap Kompela, CEO and founder of RPA2AI Research. "Like chip vendors, we are seeing the normalization of AI vendors as geopolitical stakeholders. The question is not whether AI will be used in defense contexts (it already is), but who sets the terms of that use?"
Indeed, Anthropic’s entanglement with the government signals how quickly the relationship between AI vendors and government is evolving – as well as the complexity of these dynamics.
On the surface, Anthropic could lose its $200 million contract and some opportunities with the Pentagon if the Pentagon classifies it as a supply chain risk, Kompela said, but underneath, what’s at stake is much more subtle.
“Underneath it is a negotiation over sovereignty and control,” he said. “Governments retain authority over legitimate military application. AI vendors are attempting to maintain some degree of standard governance over their systems after sale.”
For its part, the Pentagon could follow through with its threat and force Anthropic into compliance under the Defense Production Act of 1950, a federal law that requires companies to prioritize government contracts.
"This administration sometimes talks a big game and then backs off," said Michael Bennett, associate vice chancellor for data science and AI strategy at the University of Illinois Chicago. "It is also sensitive to the fact that these companies may also be present in other places around the world. They may not necessarily get hammered immediately, but there is something at stake for the administration as well."
Bennett said that the goal of the Trump administration is to ensure that America wins the AI race, which means Anthropic may stand in the way of those goals. On the other hand, if the administration pushes back against Anthropic, a startup, it could affect how other vendors respond to government pressure.
Vendors that give in to government demands also run the risk of an exodus of employees who disagree with the decision.
"[Anthropic understands] that its most valuable asset is not the model weights in the cloud, but the programmers who are working with it, who are training it," Bennett said. "That's probably a big reason why the CEO is standing firm, because he knows that's what the employees who work for Anthropic expect."
A possible compromise
The government and Anthropic will eventually reach an agreement, and Kompela believes the deal will likely be some kind of compromise.
"Either the contract language will be refined to enumerate specific prohibitions while maintaining operational flexibility, or the Pentagon will accept Anthropic's narrow assurances, allowing both sides to claim alignment," he said. "A complete breakdown would be strategically costly for both sides."
In the meantime, for AI vendors, this situation provides a proving ground to see whether they can maintain some semblance of control, Kompela added.
“The resolution will signal how much leverage private (sector) AI vendors hold when interacting with a sovereign power,” he said.
