Where OpenAI’s technology may appear in Iran

by ai-intensify

It is unclear what OpenAI’s motivations are. It is not the first tech giant to embrace military contracts after once vowing never to enter into them, but the speed of the pivot was remarkable. Maybe it’s just about the money: OpenAI is spending enormous sums on AI training and is looking for new revenue sources. Or perhaps Altman actually believes the ideological framework he often invokes: that liberal democracies (and their militaries) should have access to the most powerful AI in order to compete with China.

The more consequential question is what happens next. OpenAI has decided that it is comfortable working in the dirty center of war, as the US escalates its attacks against Iran (AI is playing a bigger role than ever before). So where might OpenAI’s technology actually appear in this battle? And what applications will its customers (and employees) tolerate?

Targets and attacks

Although its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, as it must first be integrated with other tools for military use. (Elon Musk’s xAI, which recently signed its own deal with the Pentagon, is expected to go through the same process with its AI model Grok.) But there is pressure to move quickly because of the controversy surrounding the technology in use to date: after Anthropic refused to permit unrestricted use of its AI for “any lawful use,” President Trump ordered the military to stop using it, and the Pentagon designated Anthropic a supply chain risk. (Anthropic is fighting the designation in court.)

If the Iran conflict is still going on by the time OpenAI’s technology gets into military systems, what might it be used for? My recent conversation with a defense official suggests it could look something like this: a human analyst feeds a list of potential targets into an AI model and asks it to analyze the information and prioritize which ones to attack first. The model can weigh logistics information, such as where particular aircraft or supplies are located, and can analyze many different inputs, including text, images, and video.

A human will then be responsible for manually checking these outputs, the official said. But this raises an obvious question: if someone has to double-check the AI’s output anyway, how much is the model actually sharpening targeting and strike decisions?

For years the military has been using another AI system, Maven, which can handle tasks like automatically analyzing drone footage to identify potential targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will provide a conversational interface on top of it, allowing users to interrogate the intelligence and get recommendations on which targets to attack first.

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights from oceans of data. But Iran marks the first time generative AI’s recommendations about what action to take are being seriously relied upon.

Drone defense

In late 2024, OpenAI announced a partnership with Anduril, which makes both drone and counter-drone technology for the military. Under the agreement, OpenAI will work with Anduril on time-sensitive analysis of drones attacking US forces and help shoot them down. An OpenAI spokesperson told me at the time that this didn’t violate company policies, which prohibit “systems designed to harm others,” because the technology was being used to target drones, not people.
