OpenAI and Google employees support Anthropic’s lawsuit against the Pentagon

On Monday, Anthropic filed its lawsuit against the Defense Department over its designation as a supply chain risk. A few hours later, a group of OpenAI employees and about 40 Google employees – including Jeff Dean, Google’s chief scientist and Gemini lead – filed an amicus brief in support of Anthropic’s lawsuit, detailing their concerns about the Trump administration’s decision and the risks and implications of the technology.

The news for Anthropic comes after a dramatic few weeks in which the Trump administration labeled the company a supply chain risk — a designation typically reserved for foreign companies the government deems a potential threat to national security. The label came after Anthropic stood firm on two red lines regarding acceptable military uses of its technology: domestic mass surveillance and fully autonomous weapons (AI systems with the power to kill without any human involvement). Negotiations broke down amid a public outcry, and other AI companies stepped in to sign contracts allowing “any lawful use” of their technology.

The supply chain risk designation not only bars Anthropic from military contracts; it also blacklists other companies that use Anthropic products in their work for the Pentagon, forcing them to ditch Claude if they want to keep their lucrative contracts. As some of the first models approved for classified intelligence work, however, Anthropic’s tools are already deeply integrated into the Pentagon’s operations — so much so that just hours after Defense Secretary Pete Hegseth announced the designation, the U.S. military reportedly used Claude in the operation that killed Iran’s leader, Ayatollah Ali Khamenei.

The amicus brief argues that Anthropic’s supply chain risk designation is “unfair retaliation that harms the public interest” and that the concerns behind Anthropic’s red lines “are real and require a response.” It also backs Anthropic’s two red lines, stating that “Massive domestic surveillance powered by AI poses profound risks to democratic governance – even in responsible hands” and “Fully autonomous lethal weapons systems present risks that must be addressed.”

The group behind the amicus brief described itself as “engineers, researchers, scientists, and other professionals working in U.S. frontier artificial intelligence laboratories.”

“We build, train, and study large-scale AI systems that serve a wide range of users and deployments, including in the sensitive domains of national security, law enforcement, and military operations,” the group wrote. “We present this description not as spokespersons for any one company, but as professionals in our individual capacities who have first-hand knowledge of what these systems can and cannot do, and what is at stake when their deployment outpaces the legal and ethical frameworks designed to govern them.”

On the domestic mass surveillance front, the group said that although data on U.S. citizens is everywhere in the form of surveillance cameras, geolocation data, social media posts, financial transactions and more, “what does not yet exist is an AI layer that transforms this vast, fragmented data landscape into a unified, real-time surveillance mechanism.” Right now, the group writes, these data streams are siloed, but if AI is used to connect them, it could “tie together facial recognition data with location history, transaction records, social graphs, and the behavioral patterns of billions of people.”

When it comes to lethal autonomous weapons in particular, the group said they can be unreliable in new or ambiguous situations that do not align with the environment in which they were trained — meaning they “cannot be trusted to identify a target with absolute accuracy, and they are unable to make the subtle contextual tradeoffs between achieving an objective and accounting for collateral effects that a human can.” Additionally, the group wrote, the potential of lethal autonomous weapons systems to hallucinate makes it important for humans to be involved in the decision-making process “before launching lethal munitions at a human target” – especially since the system’s chain of logic is often not available to operators and can be opaque even to the system’s developers.

The group behind the amicus brief wrote, “We are diverse in our politics and philosophies, but we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and those risks require some form of guardrails, whether through technical safeguards or through use restrictions.”
