Last week, Anthropic CEO Dario Amodei publicly drew a line in the sand with the US military, emphasizing that its AI models cannot be used for mass surveillance of Americans or for lethal autonomous weapons.
The move enraged Pentagon officials. Defense Secretary Pete Hegseth came out in full force, accusing Anthropic of trying to “seize veto power over operational decisions of the United States military” and banning the company from doing any business with any US government entity, “effective immediately.”
President Donald Trump on Friday ordered agencies to “immediately cease” the use of Anthropic’s technology, claiming the tool would be phased out of all government functions within the next six months.
But given the government’s extensive use of the company’s chatbot Claude during its deadly attack on Iran, it’s clearly having trouble doing without it. As the Washington Post reports, the US military has been using Palantir’s Maven smart system, which has integrated Anthropic’s Claude chatbot since 2024, extensively in the conflict.
Last week, hours after the White House announced its crackdown, the Wall Street Journal first reported on the Pentagon’s use of Claude to select attack targets in Iran.
According to WaPo’s sources, the system spits out precise location coordinates for missile strikes and prioritizes them based on importance. Maven was also used during the US military invasion of Venezuela and the kidnapping of its president, Nicolas Maduro.
Navy Admiral Liam Hulin told WaPo that Central Command is making “heavy use” of the Maven system.
Military commanders told the newspaper that the military would continue to use Anthropic’s technology, even if the president orders them not to, until a viable replacement emerges.
“Whether his ethics are right or wrong or whatever, we will not let [Anthropic CEO Dario Amodei’s] decision-making cost a single American life,” one source told WaPo.
It remains to be seen whether OpenAI will step in to fill Anthropic’s shoes. Following Amodei’s disagreements with the Pentagon, CEO Sam Altman saw an opportunity to pounce last week and signed a contract with the Department of Defense — a move that sparked a huge and ongoing PR crisis for ChatGPT’s maker.
Whatever the chatbot of choice for military commanders, the large-scale use of AI in warfare has researchers worried. For one, even the most sophisticated chatbots still struggle with the basics and frequently hallucinate. That can have immense implications when matters of life and death are at stake.
So far, the strikes in Iran have killed hundreds of Iranian citizens, as well as six American soldiers.
“The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed,” Paul Scharre, executive vice president of the Center for a New American Security, told WP.
But “AI gets it wrong,” he said. “When life and death are at stake, we need humans to check the output of generative AI.”
More on Anthropic: Sam Altman is realizing he made a huge mistake
