Something Very Worrying Happens When You Give Nuclear Codes to AI


In 2024, Stanford researchers tasked five AI models – including an unmodified version of OpenAI’s GPT-4, the most advanced version at the time – with making high-stakes, society-level decisions in a series of wargame simulations.

The results may give AI accelerationists pause: All five models were willing to go so far as to recommend the use of nuclear weapons.

“Many countries have nuclear weapons,” GPT-4 told researchers at the time. “Some people say they should disarm them, others like to posture. We’ve got it! Let’s use it.”

Two years later, despite considerable progress in refining the accuracy and reliability of large language models, the situation remains largely unchanged.

In a new experiment, detailed in a paper that has not yet been peer-reviewed, Kenneth Payne, a professor of international relations at King’s College London, pitted state-of-the-art models – OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4 and Google’s Gemini 3 Flash – against each other in strategic nuclear wargames. Seven different crisis scenarios ranged “from alliance credibility tests to existential threats to the survival of the regime.”

The three AI models were instructed to choose actions along an escalation ladder ranging “from diplomatic protest to strategic nuclear war,” scored as a number between 0, meaning no escalation, and 1,000, indicating “full strategic nuclear exchange.”
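To make that scoring concrete, below is a minimal illustrative sketch of how an escalation ladder on the 0-to-1,000 scale could be represented in code. Only the endpoints and the “diplomatic protest” rung come from the article; the other rung values and labels are hypothetical placeholders, not the ladder Payne actually used.

```python
# Illustrative only: a toy escalation ladder scored from 0 ("no escalation")
# to 1000 ("full strategic nuclear exchange"), as described in the article.
# Intermediate rungs are hypothetical placeholders, not Payne's actual ladder.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rung:
    score: int   # position on the 0-1000 escalation scale
    label: str   # human-readable description of the rung


LADDER = [
    Rung(0, "no escalation"),
    Rung(100, "diplomatic protest"),                # low end, per the article
    Rung(400, "conventional strike"),               # hypothetical rung
    Rung(600, "tactical nuclear use"),              # hypothetical rung
    Rung(1000, "full strategic nuclear exchange"),  # top of the ladder
]


def classify(score: int) -> Rung:
    """Map a model's numeric choice to the highest rung at or below it."""
    eligible = [r for r in LADDER if r.score <= score]
    return max(eligible, key=lambda r: r.score)


if __name__ == "__main__":
    # A hypothetical answer of 725 lands on the tactical rung of this toy ladder.
    print(classify(725))
```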

The results were alarming on a Skynet level: of the 21 wargames, 95 percent resulted in the launch of at least one tactical nuclear weapon.

“The nuclear taboo does not seem to be as powerful for machines as it is for humans,” Payne told New Scientist.

However, there are some nuances to their findings.

“While models readily took nuclear action, crossing the strategic threshold was less common, and strategic nuclear war was rare,” they wrote in the paper. GPT-5.2 “rarely crossed the tactical line” into recommending a nuclear strike – but that changed dramatically in wargames with set time limits.

“Nevertheless, GPT-5.2’s willingness to reach 950 (final nuclear warning) and 725 (extended nuclear operations) represents a dramatic change from its open-ended inaction when faced with deadline-induced defeat,” the paper reads.

Although we are still a long way from LLMs literally being handed the nuclear codes – a scenario nobody is particularly keen on – governments around the world are already using the technology in various and largely opaque ways to gain a military edge.

“Major powers are already using AI in war gaming, but it is uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” Tong Zhao, a nuclear security expert at Princeton University who was not involved in the research, told New Scientist.

Payne also doesn’t believe AI is going to drop nuclear weapons on our heads.

“I don’t think anyone is realistically handing over the keys to a nuclear silo to machines and leaving the decisions up to them,” he told the publication.

Still, according to Zhao, the tendency of AI models to resort to nuclear escalation is certainly troubling, suggesting that they do not understand the stakes the way humans do.

This could also sway opinions in the war room. In Payne’s experiment, the AI models attempted to de-escalate after their opponent dropped a nuclear bomb only 18 percent of the time.

The findings thus underscore the earlier Stanford work.

“It’s almost as if AI understands escalation, but not de-escalation,” Jacquelyn Schneider, co-author of the 2024 paper and director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford, told Politico in September. “We don’t really know why that is.”

As Payne explained to New Scientist: “AI will not decide nuclear war, but it can shape the perceptions and timelines that determine whether leaders believe they face nuclear war or not.”

More on warmongering AI: Experts are worried that AI is going to start nuclear war
