We don’t need to have unsupervised killer robots


It’s the day of the Pentagon’s impending ultimatum to Anthropic: allow the US military uncontrolled access to its technology, including for mass surveillance and fully autonomous lethal weapons, or risk being designated a “supply chain risk” and losing hundreds of billions of dollars in contracts. Amid escalating public statements and threats, tech workers across the industry are looking at their companies’ government and military contracts and wondering what kind of future they are helping to build.

While the Defense Department has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to select kill targets without human oversight, OpenAI and xAI have reportedly already agreed to such terms, although OpenAI is reportedly now attempting to adopt red lines similar to Anthropic’s. The situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought technology was about making people’s lives easier,” an Amazon Web Services employee told The Verge. “But now it seems like it’s all about making it easier to monitor, deport, and kill people.”

Current and former employees of OpenAI, xAI, Amazon, Microsoft, and Google, in conversations with The Verge, expressed similar sentiments about the changing ethical landscape at their companies. Organized groups representing 700,000 tech workers at Amazon, Google, Microsoft, and other companies signed a letter demanding that the companies reject the Pentagon’s demands. But many saw little chance of their employers, whether directly involved in the conflict or not, questioning or pushing back against the government.

“From their perspective, they would rather make money and not talk about it,” said a Microsoft software engineer.

So far, Anthropic is standing by its word. Anthropic CEO Dario Amodei said in a statement Thursday that the Pentagon’s threats do not change the company’s position: “We cannot in good conscience accede to their request.” But he has also said he is not opposed to lethal autonomous weapons in the future, only that the technology is not reliable enough “today.” Amodei also offered to partner with the DoD on “R&D” to improve the reliability of these systems, an offer the department did not accept, he wrote in the statement.

Over the past few years, however, major tech companies have loosened their rules or changed their mission statements to rake in lucrative government and military contracts. In 2024, OpenAI removed its ban on “military and warfare” use cases from its terms of service; it then signed a deal with autonomous weapons maker Anduril, followed by its DoD contract. And just this week, Anthropic changed its oft-cited responsible scaling policy, abandoning a long-standing safety pledge in order to remain competitive in the AI race. Big tech players like Amazon, Google, and Microsoft have also allowed defense and intelligence agencies to use their AI products, including agreeing to work with ICE despite growing public and employee opposition.

Over the years, resistance by tech employees to partnerships and deals seen as harmful to society at large has sometimes led to major changes. In 2018, for example, thousands of Google employees successfully pressured the company to end its Project Maven partnership with the Pentagon, and Microsoft employees presented leadership with an anti-ICE petition signed by about 500 employees, although Microsoft still works with the agency. In 2020, following the murder of George Floyd, tech companies made public statements and financial commitments in support of the Black Lives Matter movement. But in recent months, the industry has seen a very different reality: a culture of fear and silence, especially amid the Trump administration’s crackdown and its cooperation with ICE, tech workers recently told The Verge.

The companies are following in the footsteps of longtime surveillance and military technology partners that have become more aggressive. That includes Palantir, founded by Peter Thiel, whose CEO Alex Karp recently told shareholders: “Palantir is here to disrupt and make the institutions we partner with the best in the world, and, when necessary, to intimidate enemies and, on occasion, kill them. And we hope you’re on board.” (Protect Democracy, a nonprofit, recently published an open letter calling for congressional oversight of the Defense Department’s demands for unrestricted use of AI.)

OpenAI, Google, Microsoft, xAI and Amazon did not immediately respond to requests for comment.

A former xAI employee told The Verge that “everyone is really working on killer robots at this point,” adding that he believes everyone will follow in the footsteps of Palantir, Anduril, and xAI, because the government’s attitude is that if a company doesn’t agree, it’s “kind of against the benefit of the country.” “There’s a lot of emphasis on working with the military and the trend is that it’s cool to do that… if you do that, you’re a patriot,” he said.

A Google employee called the situation “a disgusting display of dominance by Hegseth.” He added: “Again and again, AI is presenting us with choices about who we want to be and what kind of society and future we want. And those choices are increasingly being made by the least thoughtful and least principled leaders we can imagine. I can only thank Anthropic for insisting on the civilized path and using their leverage, the fact that they are indispensable, to chart a course toward a humane world and a humane future.”

The AWS employee told The Verge that “lines have certainly been crossed in terms of which customers big tech is willing to court” and that “the implications of lucrative new deals are being deliberately whitewashed.” She recalled recently receiving an email from an AWS executive that touted a more than $580 million contract with the U.S. Air Force, among other partnerships, as a sign of Amazon’s AI successes, with no acknowledgment of its broad scope or the harm involved.

“If governments are bent on pursuing these kinds of technologies, they have to make them themselves and be accountable for those decisions,” she said.

This erosion can also extend to internal culture, normalizing the idea that workers are always being watched. The AWS employee said she and her coworkers are monitored for how much they use AI in their jobs, how often they work from the office, and more. “I see myself and my colleagues becoming more desensitized to how we police ourselves at work, and I worry that this means we are obeying, complying, and giving up too much already,” she said.

An OpenAI employee said that the turmoil within the AI industry over the past few weeks has “re-opened the door to greater discussion about the values and future of the technology.” The staffer said the Pentagon–Anthropic standoff, recent ICE headlines, and the rapid progress of AI have been some of the main factors opening up those discussions internally.

Still, people who are immigrants or in more vulnerable situations are more afraid to speak out, the OpenAI employee said.

Anthropic seems to be in a position where it can say no and still survive, the former xAI employee said. Its focus on enterprise rather than consumer AI business may make it more sustainable even without government contracts, which gives it some leverage. A Microsoft software engineer said of Anthropic: “I was surprised to see them standing on some kind of principle. I don’t know how long that will last.”

“Will it hold?” is the question on everyone’s mind. The Pentagon has reportedly already asked two major defense contractors, Boeing and Lockheed Martin, for information about their reliance on Anthropic’s cloud as it moves to potentially designate Anthropic a “supply chain risk,” a classification typically reserved for national security threats and rarely, if ever, applied to a US company. It is also reportedly considering invoking the Defense Production Act to try to force Anthropic to comply with its request.

If Anthropic folds like the other AI companies, the Microsoft employee said, it’s very unlikely that it or the others will turn back from killer robots and surveillance. “Once you’re in contact with the Department of Defense, or whatever we call it now… I think it’s hard for them to actually have the oversight that they claim to have. It would be beneficial to basically allow yourself to do the work that makes the most money.”

As for Microsoft itself, he said he did not expect the company to follow any kind of ethical principles. The company has worked extensively with the Israeli Defense Forces, including on mass surveillance of Palestinians and dissidents, despite employee opposition. (Microsoft said it ended some parts of the partnership last year.)

Another Microsoft employee told The Verge that although “Microsoft has a ‘responsible AI’ commitment, … they are currently attempting to play both sides for profit rather than making a meaningful commitment to responsible AI.”

But none of this is new, an AI startup employee said. In his view, the lines around what companies are willing to do to power their technology are often “vague, especially within AI.” “For as long as AI has existed, a lot of it has been going on beneath the surface.”

The AWS employee emphasized that “we need cross-tech solidarity and a coherent, worker-led vision for AI now more than ever.”

“The safeguards Anthropic is trying to preserve are no mass surveillance of Americans and no fully autonomous weapons, which means that if a machine is going to kill someone, they want a human in the loop,” she said. “Even if this technology were perfect, which it is not, I think most Americans would not want machines that kill people without human oversight in an America that has become an AI-powered mass surveillance state.”
