Inside Anthropic’s existential conversation with the Pentagon

Anthropic’s weeks-long battle with the Defense Department has played out through social media posts, pointed public statements, and quotes from unnamed Pentagon officials in the news media. But the future of the $380 billion AI startup boils down to just three words: “any lawful use.” The new terms, which OpenAI and xAI have reportedly already agreed to, would give the US military carte blanche to use their services for mass surveillance and for lethal autonomous weapons: AI with full power to track and kill targets with no humans involved in the decision-making process.

Pentagon CTO Emil Michael, formerly a top executive at ride-hailing company Uber, has raised the threat of designating Anthropic a “supply chain risk,” according to two people familiar with the negotiations, and that threat has soured the talks. The classification is usually reserved for genuine national security threats, such as malicious foreign influence or cyber warfare. Anthropic CEO Dario Amodei and Secretary of Defense Pete Hegseth reportedly met at the Pentagon on Tuesday; an unnamed defense official described it as a “useless or nonsense” meeting.

It is unprecedented for the Pentagon to level this kind of threat at an American company, and issuing it publicly is stranger still.

For security reasons, the Pentagon does not publicly disclose which companies are on these lists, let alone publicly threaten companies whose views it dislikes. In fact, Geoffrey Gertz, a senior fellow at the Center for a New American Security (CNAS), told The Verge that under current federal regulations the Pentagon could classify Anthropic as a risk without notifying the public or explaining why. “This specifically takes the additional step of characterizing them as a national security risk and preventing other companies from doing business with Anthropic, which goes above and beyond what is required here.”

The conflict is over Anthropic’s enforcement of its “Acceptable Use Policy.”

If the classification were made official, it would void Anthropic’s $200 million contract with the Pentagon, but the effect on Anthropic’s broader business would be far more devastating. Major defense contractors and technology companies, including AWS, Palantir, and Anduril, use Anthropic’s Claude in their work for the Pentagon, because it was the first AI model approved for use with classified information. Put more bluntly: if Anthropic is labeled a “supply chain risk,” any company that currently works with the military, or hopes to ever win a military contract, would have to forgo Anthropic’s AI systems, which are considered some of the best in the industry. (The evening before Amodei’s scheduled meeting with Hegseth, the Pentagon confirmed it had signed an agreement to use Grok, the controversial AI model created by Elon Musk’s xAI, in classified systems. The Pentagon did not immediately respond to a request for comment.)

The designation could be applied very narrowly, or extremely broadly. “I suspect the more logical explanation would be a narrower definition, that Anthropic cannot be used as part of a specific description of work for the Pentagon,” Gertz said. “But based on some of the reporting and the attempt to make it seem like a punitive move against Anthropic, it’s worth considering both of those scenarios.”

Although the Pentagon and its media allies have campaigned to brand Anthropic as “woke,” they have yet to make any concrete allegations about security vulnerabilities or espionage. Instead, according to people familiar with internal discussions, the conflict is over Anthropic enforcing its Acceptable Use Policy.

A source familiar with the situation, who requested anonymity because of the sensitive nature of the talks, told The Verge that Anthropic has been very clear with the government about its red lines, and that there are two narrow things the company will not agree to: autonomous kinetic operations and mass domestic surveillance. The latter, the source said, is because “laws have not caught up to what AI can do” and mass surveillance could violate Americans’ civil liberties. As for lethal autonomous weapons, the source said the technology is not yet ready for “fully autonomous weapons, with no humans involved.”

Hamza Chaudhry, head of AI and national security at the Future of Life Institute, a nonpartisan research group focused on AI governance, said Anthropic’s red lines already reflect current government directives that have not been repealed.

“DOD Directive 3000.09 requires that all autonomous weapons systems be designed so that commanders and operators can ‘exercise an appropriate level of human judgment over the use of force,’ and the Political Declaration on the Military Use of AI, initiated by the U.S. government and endorsed by some 50 states, establishes the same principle,” he pointed out to The Verge. “And DoD Directive 5240.01, reinforced by provisions in the FY2017 NDAA and the Trump-era Responsible AI Implementation Pathway, prohibits intelligence components from collecting information on US persons except under specific legal authorities such as FISA or Title 50.

“Anthropic’s Acceptable Use Policy reflects these very lines, and until the Pentagon formally renounces, clarifies, or updates these policy positions, the bigger question is whether the company can be forced to deviate from a policy to which the government itself has committed in principle.”

Michael, who is negotiating on behalf of the Pentagon, is a Trump appointee and the Under Secretary of Defense for Research and Engineering, a position often described as the Pentagon’s chief technology officer. The first source described Michael, who built an aggressive reputation as Uber’s chief business officer and once bragged about conducting opposition research on journalists, as a “tough negotiator.” (Michael was ousted from Uber in 2017 following a board investigation into the company’s culture and sexual harassment, prompted in part by his and several other executives’ visit to a South Korean escort bar.)

“It’s really a matter of principle for Emil,” said a second person familiar with the matter, adding that Michael was unhappy that a private company was attempting to restrict the government’s use of its technology. It’s unclear whether the White House or David Sacks, the venture capitalist who serves as the administration’s AI and crypto czar, approved Michael’s hardball strategy in advance.

Anthropic’s Acceptable Use Policy is currently incorporated into the $200 million contract it signed with the Defense Department last July. In its announcement, the company mentioned “responsible AI” five times. “At the core of this work is our strong belief that the most powerful technologies carry the greatest responsibility,” the company wrote, adding that in the context of government, “where decisions affect millions of people and the stakes could not be higher,” that responsibility was “essential” to ensuring that “AI developments strengthen democratic values globally while maintaining technological leadership to protect against authoritarian abuse.”

“The designation will require every defense contractor seeking government work to certify that they have removed all Anthropic technology from their systems.”

But in January, Hegseth published a memorandum declaring that the department will “become an ‘AI-first’ fighting force across all components” and that “any lawful use” language must be added to all AI service procurement contracts within 180 days, including existing ones.

In the memo, Hegseth repeatedly emphasized that the department would prioritize speed at all costs, writing that the country must “eliminate barriers to data sharing… [and] approach risk tradeoffs, ‘equity’ and other subjective questions as if we were at war.” He also wrote that when it comes to developing and using AI agents, the department will integrate them “from campaign planning to kill chain execution” and “turn intel into weapons in hours.”

Hegseth repeatedly prioritized speed over safety and the potential for error: “We must recognize that the risks of not moving fast enough outweigh the risks of imperfect alignment.” Later in the memo, he wrote that “responsible AI” would see major changes in the department, both on the battlefield and within the ranks. “Diversity, equity and inclusion and social ideology have no place in DOW,” he wrote, adding that the department should “use models free from policy constraints that could limit legitimate military applications.” In line with Trump’s anti-“woke AI” executive order, Hegseth announced that a benchmark for model fairness will become a new primary purchasing criterion for AI services.

OpenAI, xAI, and Google immediately renegotiated their $200 million contracts with the Pentagon to align with Hegseth’s memo. But none of those companies’ models have an Impact Level 6 security classification, meaning ChatGPT, Grok, and Gemini couldn’t immediately replace Claude if Anthropic were blacklisted, a single-supplier gap that would hurt the Pentagon itself.

“Claude is the only frontier AI model operating entirely on the classified Pentagon network, deployed through Palantir’s AI platform and Amazon’s top secret cloud, meaning it sits at the center of workflows that most other models can’t yet access,” Chaudhry said. “The designation will require each defense contractor seeking government work to certify that they have removed all Anthropic technology from their systems.”

This has given Anthropic leverage in its confrontation with the Pentagon, which reportedly intensified after the company learned that its model had been used in the capture of Venezuelan President Nicolás Maduro, in violation of its existing agreement.

Anthropic technically cannot coordinate with the other AI labs being offered the new terms, even if they were willing, because doing so would violate federal procurement rules. But as the fight spills into public view, current and former workers across the tech and AI industries have expressed frustration that other companies are not fighting for the same terms as Anthropic. Others think it is only a matter of time before Anthropic gives in.

“This would be a really good time [for other labs] to say, ‘Wait, what are you doing with our technology?’” said William Fitzgerald, a former Google employee who now runs an advocacy firm called The Worker Agency. “The people in these AI labs have a lot of power. They’re small teams, and they’re still shaping who they’re going to be… I think they can justify their valuations without doing military work. There are other ways you can run a business without killing people in your business model.”
