Hello and welcome to Regulator, a newsletter for Verge subscribers that offers an inside look at the growing existential clash between technology and politics in Washington. If this was forwarded to you, may I interest you in a full membership to The Verge for only $40 per year? You’ll get much more than just disaster scenarios; we also cover fun, non-existential things, like Lego.
Do you work somewhere at the intersection of government, technology, and existential threats? Send tips to tina.nguyen+tips@theverge.com or reach me on Signal at @tina.nguyen19.
To put it mildly, it was not a quiet weekend.
For a few hours on Saturday, I thought the Anthropic-Pentagon contract dispute, which appeared to end Friday night when Secretary of Defense Pete Hegseth declared the company a supply-chain risk, would be left behind in the news cycle. That’s because, right around 1AM Saturday, the US launched 100 military fighter jets toward Iran. I had been messaging sources late into the night about OpenAI’s new contract with the Pentagon, asking whether Sam Altman had actually secured red lines on mass surveillance and lethal autonomous weapons; by the time I woke up, the United States had assassinated Ayatollah Ali Khamenei and several other Iranian leaders in brazen, uncharacteristically public airstrikes on Tehran in broad daylight.
But it soon became clear that Anthropic was part of the story, too. On Sunday, The Wall Street Journal reported, citing sources familiar with the situation, that Claude-powered intelligence tools were used across multiple military command centers during the strike. Exactly how the Pentagon used Claude in this specific operation in Iran is unknown, and that information will likely remain classified, known only to those directly involved. But the Journal wrote that Claude, which as of last week was the only AI system with the security clearance to handle classified information, was already deeply embedded in technology that “conducted intelligence assessments, target identification and simulated combat scenarios,” technology that was apparently used in the Iran attack.
A few conclusions can be drawn from this. First, the entire conflict was never really about Anthropic posing a national security risk (though the public could already see that). Second, while AI has not yet reached the “fully autonomous lethal weapon” stage, it is already sophisticated enough to enable an impressively precise (though inconveniently extralegal) strike on a foreign leader. That’s all the more striking considering that Iran had been under a near-total, government-imposed internet blackout for several months, with virtually no digital connection to the outside world.
I pinged Hamza Chaudhry, the AI and national security lead at the nonpartisan Future of Life Institute, for his longer-term perspective on Operation Epic Fury. He noted that both sides of the conflict were already using artificial intelligence in their warfare (Iran has deployed AI-assisted missiles in recent months), and while the US clearly prevailed in this scenario, it was a prelude to what he described as the “dyadic automated warfare problem”: two AI systems effectively talking to each other through dynamic action, each adapting and reacting faster than human decision-makers can.
Chaudhry’s nightmare scenario, however, is the end of nuclear deterrence as a tool for global stability:
“Recent analyses of the 2025 India–Pakistan and Iran–Israel conflicts have shown that AI makes second-strike forces more transparent and thus more vulnerable. And while nuclear arsenals still impose a ceiling on full-scale war, AI lowers the floor for sub-threshold aggression and compresses political reaction time. If an adversary believes its nuclear deterrent is becoming visible (e.g., submarines trackable, mobile launchers findable, command infrastructure mappable), the rational response is to expand the arsenal or shift to a launch-on-alert posture.
“Experts have called this ‘arms race instability’: the risk that one side might seek breakout advantages in advanced technology, triggering matching efforts by the other. This is not an imaginary future problem. The technologies that made Operation Epic Fury possible are the same technologies that are gradually making nuclear deterrence more fragile. We have no international governance framework that adequately addresses this.”
So what exactly is in that magical, red-line-honoring contract that Altman was bragging about? For now, we don’t know, beyond what OpenAI wrote about the contract on its company blog. Though the post was essentially a press release, it included excerpts of what the company claims is the contract itself, stating that “AI systems will not be used for unrestricted surveillance of the private information of US persons consistent with these authorities,” citing several preexisting security laws. But that didn’t pass the legal smell test, either. As my colleague Hayden Field reported yesterday:
OpenAI appears to rely heavily on existing legal limitations. It added that its Pentagon agreement states that “For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”
But that’s not reassuring. In the years after 9/11, US intelligence agencies built a surveillance apparatus that they determined fell within the very legal boundaries OpenAI is citing, including several large-scale domestic spying operations (to say nothing of the extremely aggressive international ones).
And here’s another piece of imaginary legalese, pointed out by a reader: “unrestricted surveillance” isn’t even an actual legal term, much less one defined in the laws OpenAI is citing.
Did Hegseth jump the gun?
At first glance, President Donald Trump’s 3:47PM Friday post on Truth Social seemed like the final verdict on Anthropic. But a careful reading suggests that Trump may have actually been ready to negotiate. Nowhere in the post does he threaten to punish other companies for being Anthropic customers, and the sentence below contains Trump’s only real legal threat against Anthropic. Note the operative words, “or I will”:
“Anthropic had better do its job, and be helpful during this period, or I will use the full power of the Presidency to bring them into compliance, with major civil and criminal consequences.”
White House watchers immediately read this as a classic de-escalation signal — an action that could be taken, but hadn’t been taken yet — meant to buy the Pentagon and Anthropic several months of time. The strategy only makes sense in the context of how Trump has used social media as both carrot and stick throughout his presidencies. He publicly posts an aggressive threat online, whether it’s new tariffs, an investigation into a company, or the nuclear annihilation of North Korea. Then a deal is struck behind the scenes, and within a few weeks, Trump is extolling the virtues of his former enemies on Truth Social, sometimes even embracing them, and he suffers no political backlash at all, because Trump does this all the time.
That understanding lasted about an hour and a half, until Hegseth posted his decision to officially designate Anthropic a supply-chain risk, threatening to punish defense contractors who do “any commercial business” with the company and declaring that his decision was “final.” Generally speaking, the Secretary of Defense has the power to make this designation unilaterally, without even announcing it to the public or obtaining the President’s sign-off. But with that wording, Hegseth threw the entire tech industry for a loop.
So far, no one I’ve talked to — in industry, policy, or otherwise — has any idea what “any commercial business” actually means, or what kind of punishment companies would face if they kept contracting with Anthropic for non-defense purposes. Anthropic, meanwhile, said Friday that the supply-chain risk designation applies only to the Department of Defense’s use of Claude and does not extend beyond those limits.
If anyone knows how a ban on “any commercial business” with Anthropic could feasibly (and, if possible, legally) be enforced against defense contractors, please get in touch via the contact info above.
Several people reiterated to me last week that, egos and the whole “illegal punishment” thing aside, there is a serious intellectual argument for the Pentagon’s position: a private company should not be able to dictate what the United States government, an entity elected by the American people, does with its technology. But I can’t get over the fact that the person who made this argument most compellingly, on social media at least, was Jeremy Levin, an undersecretary at the State Department, which is an entirely separate government entity from the Defense Department.
Meanwhile, Emil Michael, the Uber corporate cautionary tale turned Pentagon CTO, is leading negotiations with both Anthropic and OpenAI, mostly by posting nonstop ad hominem attacks on X, calling Dario Amodei a “fraud [with] a God-complex,” among other things, often at midnight. (Fun fact: Michael has posted more on X about Anthropic in the past few days than about the Iran attack.)
Last week, before all this madness happened, I attended a sold-out taping of The Hopkins Forum’s Open to Debate series on the topic “Will AI make work obsolete?” I was mostly there for the guest panelists (in what other world do you get to watch Andrew Yang square off against Facebook co-founder Chris Hughes?), but the debate itself was quite compelling.
The doomer position was argued by Yang and MIT professor and Nobel-winning economist Simon Johnson: AI is going to cause massive job losses, and there are no mechanisms in place to prevent massive social upheaval. The optimistic position was argued by Hughes and Rumman Chowdhury, a data and social scientist and co-founder of the nonprofit Humane Intelligence: there is a future in which AI can enhance human work and improve human lives. Both sides fully agreed that unchecked corporate greed would probably turn AI into a disaster scenario, but it was refreshing to hear someone make an optimistic case for AI at all.
The episode goes live on Friday on the Open to Debate Substack. In the meantime, I learned that if you say “David Sacks” in a room full of tech people in Washington, DC, someone will immediately come up to you and start complaining about him, completely unprompted.
