The ongoing attacks on the Islamic Republic of Iran, launched by a joint coalition of American and Israeli military forces, have claimed 555 Iranian lives, including 165 deaths from a single strike on a primary school in southern Iran.
As the Wall Street Journal reports, military strike forces selected their targets with the help of Anthropic's Claude chatbot as the attacks unfolded.
According to the paper, Anthropic's large language model, Claude, is the key "AI tool" used by US Central Command in the Middle East. Its functions include assessing intelligence, conducting simulated war games, and even identifying military targets – in short, helping military leaders plan attacks that have already killed hundreds of people.
Anthropic’s role in the devastating attacks may come as news to those who thought the company’s ethical boundaries prevented it from doing any military work. The company and its CEO, Dario Amodei, are locked in a messy conflict with the Trump administration over two particular ethical limits: the use of Claude to monitor American citizens, and its use in fully autonomous lethal weapons.
It appears that using Claude to select targets does not violate the bot’s ethical guardrails.
This is surprising, given that Anthropic spent the latter half of February embroiled in a conflict with the Pentagon over the military's use of Claude.
Last week, the Pentagon – which currently uses Claude across its classified systems – set a deadline for Anthropic to drop its dual red lines against surveillance and fully autonomous weaponry. Anthropic let that deadline pass without budging, establishing what many interpreted as a principled stance against the Trump administration’s militarism.
Still, as Pulitzer Prize-winning national security journalist Spencer Ackerman notes, it is important to remember which ethical lines were already ignored when Anthropic first made its agreement with the military.
“Amodei, it is extremely clear, does not register the creation of a surveillance panopticon of foreigners as a problem,” Ackerman wrote. “The time to worry about everything related to Amodei was before he signed the contract he now doesn’t want to give up. America is in such steep decline that we can’t even make Oppenheimers like we used to.”
“When you take Doctor Doom’s money to provide a lathe for manufacturing components for anthropomorphic robots,” Ackerman added tartly, “don’t you understand that he’s going to build Doombots?”
More on Claude: Anthropic abandons the sweeping safety pledge that was allegedly the company’s entire purpose
