Don’t bet on whether the Pentagon – or Anthropic – is acting in the public interest

by Nathan E Sanders and Bruce Schneier


OpenAI is in, and Anthropic is out, as a supplier of AI technology to the US Department of Defense. The news caps a week of escalating conflict between top US government officials and one of the richest giants of the tech industry, over a technology so powerful the Pentagon deems it essential to national security. At issue is Anthropic’s insistence that the Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons” – conditions Defense Secretary Pete Hegseth derided as “woke”.

It all came to a head on Friday evening, when Donald Trump ordered federal government agencies to stop using Anthropic’s models. Within hours, OpenAI had swooped in and potentially secured millions of dollars in government contracts by striking an agreement with the administration to supply AI for classified government systems.

Regardless of the histrionics, this is probably the best outcome for both Anthropic and the Pentagon. Both are, and should be, free to buy and sell what they want in our free-market economy, subject to longstanding federal rules on contracting, procurement and debarment. The only element here that is out of line is the Pentagon’s retaliatory threats.

AI models are increasingly commoditized. The performance of the top-tier offerings is nearly identical, with little to differentiate one from another. The latest models from Anthropic, OpenAI and Google, in particular, leapfrog one another by small margins of quality every few months. Even the provider with the best model is only a slight favorite: users prefer it to the second-, third-, or tenth-best model only about six times out of ten – a virtual tie.

Branding matters a lot in this kind of market. Anthropic and its CEO, Dario Amodei, have positioned the company as an ethical and trustworthy AI provider. That has market value with both consumers and enterprise customers. In replacing Anthropic on the government contract, OpenAI’s CEO, Sam Altman, swore that his company would somehow maintain the very safety principles Anthropic was just punished for upholding. How that is possible is entirely unclear, given the rhetoric of Hegseth and Trump, but it is sure to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Publicly standing up to the Pentagon, and being cast as a hero by civil libertarians, is probably well worth the cost of the lost contracts to Anthropic – and associating itself with those same contracts could be a trap for OpenAI. Meanwhile, the Pentagon has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models – whose parameters are public and often permissively licensed for government use.

We can admire Amodei’s stance, but of course it is mostly posturing. Anthropic knew what it was getting into when it agreed to a $200m partnership with the defense department last year, and when it signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement on the issue, or his January essay on AI and risk, where he repeatedly invokes the words “democracy” and “autocracy” while avoiding the question of how cooperation with US federal agencies should be viewed at this moment. Amodei has bought into the idea of the world’s democracies using AI to “achieve robust military superiority” in response to threats from autocratic regimes. It is an intoxicating vision. But it is also a vision that assumes the world’s nominal democracies are committed to a shared conception of the common good, the pursuit of peace, and democratic control.

Nevertheless, the Department of Defense can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is no ordinary customer; it buys products that kill people all the time. Tanks, artillery pieces and grenades are not products with moral guardrails. The Pentagon’s needs include weapons of appropriately lethal force and, if possible, weapons on a steady – if objectionable – path of increasing automation.

So, on the surface, this dispute is a normal market transaction. The Pentagon has specific requirements for the products it uses. Companies can decide whether to meet them, and at what price. And then the Pentagon can decide whom to buy those products from. It seems like a normal day at the procurement office.

But of course this is the Trump administration, so it doesn’t stop there. Hegseth didn’t just threaten Anthropic with losing government contracts. The administration has – at least until the inevitable lawsuits force the courts to sort things out – designated the company a “supply-chain risk to national security”, a designation previously applied only to foreign companies. That bars not only government agencies, but also their own contractors and suppliers, from doing business with Anthropic.

The government has also threatened to invoke the Defense Production Act, which could force Anthropic to strip out contractual provisions the department previously agreed to, or perhaps to fundamentally modify its AI models to remove built-in safety guardrails. The government’s demands, Anthropic’s response and the legal context in which they operate will undoubtedly shift in the coming weeks.

But the larger worry is that autonomous weapons systems are here to stay. Primitive pit traps evolved into mechanical bear traps. The world is still debating the ethical use of landmines and dealing with their legacy. The US Phalanx CIWS, a 1980s shipboard anti-missile system, is a fully autonomous, radar-guided cannon. Today’s military drones can search for, identify and attack targets without direct human intervention. AI will be put to military use, just as every other technology our species has invented has been.

The lesson here should not be that one company is more ethical than another in our rapacious capitalist system, or that one corporate hero can stand in the way of governments adopting AI as a technology of war, or surveillance, or repression. Unfortunately, we do not live in a world where such barriers are permanent or particularly strong.

Instead, the lesson is about the importance of America’s democratic structures and the urgent need for their renewal. If the Department of Defense seeks uses of AI for mass surveillance or autonomous warfare that we, the public, consider unacceptable, that tells us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the government strong-arming companies into unsafe applications of their products, we must strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capability within the law. And private companies like Anthropic should strive to earn the trust of consumers and buyers. But we should not comfort ourselves with the belief that either, in doing so, is acting in the public interest.

  • Nathan E Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University and co-author, with Bruce Schneier, of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Bruce Schneier is a security technologist who teaches at the Harvard Kennedy School.
