On February 5 Anthropic released its most powerful artificial intelligence model to date, Claude Opus 4.6. New to the model is the ability to coordinate teams of autonomous agents: multiple AIs that divide up work and complete it in parallel. Twelve days after the release of Opus 4.6, the company released Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer-use prowess. When Anthropic first introduced models that could control computers in 2024, they could barely operate a browser; now Sonnet 4.6 can navigate web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library.
Enterprise customers now account for about 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-growing technology companies in history.
But behind the big product launches and evaluations, Anthropic faces a serious threat: the Pentagon has signaled that it could designate the company a "supply chain risk" – a label usually associated with foreign adversaries – unless it lifts its restrictions on military use. Such a designation could effectively force Pentagon contractors to steer sensitive work away from Claude.
Tensions escalated after January 3, when US special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that the forces used Claude during the operation through Anthropic's partnership with the defense contractor Palantir, and Axios reported that the episode escalated an already fraught conversation about what, exactly, Claude could be used for. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate concern at the Pentagon. (Anthropic has disputed that the outreach was intended to signal disapproval of any specific operation.) A senior administration official told Axios that Defense Secretary Pete Hegseth is "close" to severing the relationship, adding, "We're going to make sure they pay the price for forcing themselves on us like this."
The confrontation highlights a question: Can a company founded to prevent AI disaster hold its ethical lines when its most powerful tools – autonomous agents capable of processing vast datasets, identifying patterns and acting on their findings – are running inside classified military networks? Is safety-first AI compatible with a customer that wants systems that can reason, plan and act at military scale?
Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons. CEO Dario Amodei has said Anthropic "will support national defense by all means except those that would make us look like our autocratic adversaries." Other major labs – OpenAI, Google and xAI – have agreed to loosen safety restrictions for use in the Pentagon's unclassified systems, but their tools are not yet running inside the military's classified networks. The Pentagon has demanded that AI be available for "all lawful purposes."
The friction tests Anthropic's central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously, and it positions Claude as the moral choice. In late 2024 Anthropic made Claude available on Palantir's platform at security classification levels up to "Secret," making Claude, by public accounts, the first major language model to work inside a classified system.
The standoff is now raising questions about whether safety-first is a coherent identity for a technology embedded in classified military operations and whether the red lines are actually enforceable. "The words sound simple: no mass surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology. "But when you come down to it, there's a whole army of lawyers trying to figure out how to interpret that phrase."
Consider an example. After the Edward Snowden revelations, the US government defended the bulk collection of phone metadata – who called whom, when and for how long – by arguing that this type of data does not carry the same privacy protections as the content of conversations. The privacy debate then centered on human analysts querying those records. Now imagine an AI system interrogating huge datasets: mapping networks, detecting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis.
"In some sense, any kind of large-scale data collection that you ask an AI to look at is mass surveillance almost by definition," says Peter Asaro, co-founder of the International Committee for Robot Arms Control. Axios reported that the senior official argued there is too much gray area around Anthropic's restrictions and that it is impractical for the Pentagon to negotiate individual use cases with the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is virtually impossible to define in the age of AI. The pessimistic one, Asaro says, is that "they really want to use these for mass surveillance and autonomous weapons and don't want to say that, so they call it a gray area."
Anthropic's other red line, on autonomous weapons, comes with a definition narrow enough to seem manageable: systems that select and engage targets without human supervision. But Asaro finds the gray zone here more troubling. He points to the Israeli military's Lavender and Gospel systems, which reportedly use AI to generate massive target lists that go to human operators for approval before strikes are carried out. "You have, essentially, automated the targeting element, which is something [that] we are very concerned about and [that] is closely related, even if that falls outside the narrow strict definition," he says. The question is whether Claude, working inside Palantir's systems on classified networks, could do something similar – processing intelligence, identifying patterns, surfacing individuals of interest – without anyone at Anthropic being able to say exactly where the analytical work ends and the targeting begins.
The Maduro operation tests precisely that distinction. “If you’re collecting data and intelligence to identify targets, but humans are deciding, ‘Okay, this is the list of targets we’re actually going to bomb’ — then you have the level of human supervision that we need,” Asaro says. “On the other hand, you’re still relying on these AIs to choose these targets, and how much investigation and how much digging into the validity or legality of those targets is a different question.”
Anthropic is trying to draw the line more narrowly – between mission planning, where Claude could help identify bombing targets, and the mundane work of processing documentation. "There are all these boring applications of large language models," Probasco says.
But the capabilities of Anthropic's models may make those distinctions difficult to maintain. Opus 4.6's agent teams can split up a complex task and work on it in parallel – an advance in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. The same features that drive Anthropic's commercial dominance are what make Claude inside a classified network so attractive. A model with a working memory large enough to hold a small library can also hold an entire intelligence dossier. A system that can coordinate autonomous agents to debug a codebase can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has vowed to refuse.
As Anthropic pushes the boundaries of autonomous AI, the military's demand for those tools will only grow. Probasco worries that the confrontation with the Pentagon creates a false binary between safety and national security. "Why can't we have safety and national security?" she asks.
