Defense contractors will be barred from working with the government if they use Claude, Anthropic's AI model, in their products, according to a decision first reported by The Wall Street Journal on Thursday, which cited a source familiar with the matter. Although this type of designation is typically applied to foreign companies with ties to hostile governments, this is the first time an American company has publicly received the label.
At the center of the conflict is Anthropic's refusal to allow the Pentagon to use Claude for two purposes: autonomous lethal weapons without human oversight, and mass surveillance. The Pentagon has argued that Anthropic's demands for control over government use would put too much power in the hands of a private company, while Anthropic was not assured that the government would respect its red lines. Negotiations turned ugly when the Pentagon threatened to invoke a supply-chain risk designation if Anthropic refused to comply with its demands. After Anthropic announced last Thursday that it would not comply, the Pentagon followed through on that threat. (The Pentagon did not comment on the record. Anthropic did not immediately respond to a request for comment.)
It is unclear how broadly the Pentagon will enforce this designation. On Friday, when he announced his intention to label Anthropic a threat, Defense Secretary Pete Hegseth said that any company doing "any commercial activity" with Anthropic, even outside of its work for the Pentagon, will have its defense contracts canceled. At the time, Anthropic responded that such a broad application of the law would be illegal.
Hegseth and President Donald Trump set a six-month deadline for Anthropic to remove Claude from government systems, but that won't be easy, especially for the military. After the US attacked Iran over the weekend, reports indicated that a targeted missile strike killed Supreme Leader Ayatollah Ali Khamenei, and that Claude-powered intelligence tools played a major role in the mission's success.
