The Guardian’s view on AI in warfare: the Iran conflict shows the paradigm shift has already begun – Editorial

“In the future we will never move as slowly as we do now,” cautioned UN secretary-general António Guterres this week, addressing the urgent need to shape the use of artificial intelligence. The pace of technological development – as well as geopolitical unrest – is blurring the gap between theoretical arguments and real-world events. A political dispute over the US military’s AI capabilities has been matched by their unprecedented use in the Iran crisis.

The AI company Anthropic emphasised that it would not remove safeguards preventing the Defense Department from using its technology for domestic mass surveillance or autonomous lethal weapons. The Pentagon said it has no interest in such uses – but that such decisions should not be made by companies. The administration has not only dropped Anthropic but blacklisted it as a supply-chain risk. OpenAI stepped in, insisting that it would maintain the red lines Anthropic had drawn. Yet in an internal response to user and employee feedback, its CEO, Sam Altman, acknowledged that it does not control the Pentagon’s use of its products and that its handling of the deal made OpenAI look “opportunistic and sloppy”.

But as Nicole van Rooijen, executive director of Stop Killer Robots – which campaigns for human control in the use of force – has warned: “The issue is not just whether these weapons will be used, but also how their precursor systems are already changing the way wars are fought … Human control risks becoming an afterthought or a mere formality.”

The paradigm shift has already begun. Despite the controversy, Anthropic’s Claude has reportedly facilitated a massive and intense assault that is estimated to have already killed thousands of civilians in Iran. This is the age of bombing “faster than the speed of thought”, experts told the Guardian this week, with AI identifying and prioritising targets, recommending weapons and evaluating the legal basis for attack.

AI is not a prerequisite for civilian deaths, military errors or irresponsibility. The US defense secretary, Pete Hegseth, claims to have loosened the rules of engagement. It is humans in the Pentagon who are avoiding questions about the deaths of 165 schoolgirls in an American attack on a school in Iran on 28 February.

But – even setting aside questions of AI’s inaccuracy and biases – the effects are clear to its users. An Israeli intelligence source noted its use in the war on Gaza: “The targets never end. You have another 36,000 waiting.” Another said he spent 20 seconds assessing each target, adding: “Apart from the stamp of approval, there was zero added value to me as a human being.” Mass killing has been made easier in every sense, by increasing moral and emotional distance and reducing accountability.

Democratic oversight and multilateral constraints are necessary; these decisions cannot be left to entrepreneurs and defence departments. As bombs rained down on Iran, states met in Geneva to address lethal autonomous weapons systems; the draft text they considered would be a strong foundation for the treaty that is so desperately needed. Most governments want clear guidance on military use of AI. It is the biggest players who object. The pace of AI-powered warfare means that caution can feel like handing control to opponents. Yet as technical staff and military officers themselves are realising, the dangers of uncontrolled expansion are far greater.

  • Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email for consideration for publication in our letters section, please click here.
