After a school was razed to the ground in an airstrike that killed 165 Iranian elementary school students and staff, the Pentagon has refused to say whether the attack was suggested by an AI system.
That grim possibility is not as far-fetched as it sounds. According to bombshell reporting by the Wall Street Journal, the Pentagon used Anthropic's Claude AI models in planning military strikes on Iran over the weekend, and is likely still using them as the Trump administration's attacks continue.
However it happened, in the opening salvo either the US or Israel (available information points to the former) destroyed the Shajareh Tayyebeh Girls School, located in the southern Iranian city of Minab. Most of those who died in the strike, Al Jazeera reports, were primary students aged seven to 12. At least 95 other people were injured in the attack.
Making matters graver still, reporting from Middle East Eye indicates that Shajareh Tayyebeh was struck a second time after the initial missile attack, injuring first responders and parents who had come to pick up their children. That so-called "double tap" recalls the American bombing of civilian boats in Venezuela under Donald Trump and airstrikes in Pakistan under Barack Obama.
Given the alleged use of AI by the United States to select at least some military targets in Iran, a big question remains unanswered: did the US use Claude to decide whether to destroy an elementary school?
When Futurism contacted the Pentagon about its use of AI in recent military operations, specifically the targeting of the Shajareh Tayyebeh Girls School, we were referred to US CENTCOM, one of eleven unified commands under the Pentagon's umbrella.
"We don't have anything for you on this at this time," CENTCOM said.
Claims that the US military is using Claude for a war that has killed more than 1,000 people in under a week may seem hard to believe. Unfortunately, it's a tune we've heard before.
Back in April of 2024, an investigation by +972 Magazine revealed that the Israeli military had leaned on an AI system called "Lavender" to select targets in its war on Gaza, much as the Pentagon is reportedly using Claude in Iran. According to six Israeli intelligence officers, Lavender played a "central role" in the destruction of Gaza and its population, identifying at least 37,000 Palestinians as targets to be killed in airstrikes.
As one intelligence officer told +972, Lavender's decisions, which often included suggestions to attack targets in their homes, were treated by military operators "as if it were a human decision."
It is difficult to overstate the moral consequences of such a system. One Israeli military source told the Guardian: "I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time."
The trend portends a brutal new era of warfare in which it is no longer clear whether humans, or at least humans alone, are making life-and-death decisions about where to deploy the deadliest arsenal in human history, even when the casualties are dozens of schoolchildren.
Do you have any information about how the US military is using AI? Send us a tip: tips@futurism.com – we can keep you anonymous.
More on military operations: Polymarket Quietly Removes Bets on Nuclear Explosion
