What does the US military’s feud with Anthropic mean for AI used in warfare?

Anthropic’s ongoing battle with the Defense Department over what security restrictions it can place on its artificial intelligence models has enthralled the tech industry, serving as a test of how AI could be used in warfare and of the government’s power to force companies to comply with its demands.

The dispute revolves around Anthropic’s refusal to allow the federal government to use its Claude AI models for domestic mass surveillance or autonomous weapons systems, but it also reflects the messiness that can arise when tech companies’ products are integrated into warfare. The Pentagon this week declared Anthropic a supply chain risk over its refusal to agree to the government’s terms, while Anthropic has vowed to challenge the designation in court.

The Guardian spoke to Sarah Kreps, professor and director of the Tech Policy Institute at Cornell University, who previously served in the United States Air Force, about how the feud came about.

You have worked for a while on the problems of “dual-use technology”. What happens when consumer technology is also used for classified or military purposes?

I’ve thought about this a lot because I was in the military, on the side that was developing and acquiring new technologies. We were always getting criticism about why it was taking so long, and now, seeing what’s happening, I understand why it took so long.

What you would develop for classified and military contexts is very different from what Anthropic builds for the Claude I use. The challenge for the military is that these technologies are so useful that it cannot wait for military-grade versions to become available. Because these tools are so valuable, the military needs to move quickly, so it’s not surprising that it ran into cultural differences, not just between an AI platform and the military, but with an AI platform that has tried to build a reputation for being more safety-conscious.

One element in this feud is that Anthropic has branded itself as a safety-forward company, but it then signed an agreement with the military.

Yes, there’s a sense in which Anthropic must be surprised to see where it ended up. Part of the challenge is that Anthropic decided a year or two ago that ChatGPT would be for individual users and that Anthropic would try to capture the enterprise market. That means doing business with organizations rather than selling individual plans.

The puzzle to me is that they were then doing business with the Pentagon and Palantir, which is in the business of using AI for what some would call dubious purposes. That decision surprised me because it was the exact opposite of the brand Anthropic was trying to build.

It looks like Anthropic was OK with fairly extensive use of its technology, but it drew a red line at domestic mass surveillance and lethal autonomous weapons.

There are some possibilities. One is that it had something to do with the relationship between Anthropic and people in the Trump administration, which led to an erosion of trust.

Second, there was the situation in Venezuela and then the politics surrounding ICE activities. The question is, what does it really mean to use these technologies legitimately? One person’s definition of legitimate may look very different from another’s.

The Pentagon’s reasoning was, in part: if there is a national defense issue, we shouldn’t have to call Dario Amodei to get approval. There seems to be a real question here about what role private tech companies have in national security decision-making.

If you remember the case of the San Bernardino shooter’s iPhone, authorities were worried it was a ticking-time-bomb situation and they needed Apple to get into the phone. (In February 2016, the FBI demanded that Apple create a backdoor to provide access to a mass shooter’s phone. Apple refused on privacy grounds, and the FBI turned to an independent third party to hack the device.)

The difference here with Anthropic’s AI is that once you hand it over to the military, the military doesn’t need Anthropic’s approval to use it as it wishes. That’s the difference between hardware and software. The software can be repurposed and used in ways that perhaps were not part of the explicit agreement, but that can now be justified on national security grounds. At that point Anthropic has lost all its leverage, because the technology is in the hands of these national security professionals.

And Anthropic won’t be able to tell what it’s being used for, right?

Yes, absolutely correct. It not only goes into a black box, but also into black ops and classified systems.

I found it interesting this week that a lot of long-standing questions about the use of AI in the military seem to be coming to the fore. You’ve been following these issues for a long time; what are you thinking as you watch this current fight?

When I listened to Anthropic’s CEO speak, he talked about existential risks and the misuse of AI for bioterrorism. I always thought those risks were either far off or far-fetched; I thought a more mundane case like this one was the real risk.

People have been anticipating these questions about autonomous weapons for a long time, too. The challenge is how you know whether there is actually a human in the loop. Anthropic had this concern: how do we know these systems aren’t being used in a fully autonomous way? The US says it will not use AI in a fully autonomous capacity, but it’s not clear what the process looks like for ensuring that doesn’t happen. It was taking some time, but I think it was inevitable that we would move in that direction as the technology became more and more sophisticated. Now the fact of being involved in a conflict accelerates those timelines.

We talk a lot about the dangers of AI and these red lines that people step back from, but how is AI already being used in warfare?

You can see how useful this is in a military setting. I did some work on the intel side, and one of the challenges is not a lack of content; it’s the signal-to-noise ratio. You have massive amounts of information, but it can be really hard to connect the dots, and that’s something AI is very good at. You feed it a large amount of information and it produces output that helps analysts identify the signal.

If you are looking for pattern recognition, AI is really good at pattern recognition. You can specify what kinds of correlations or features you’re looking for, and it can go out and identify things, for example an Iranian naval ship, based on what you’ve programmed it to identify. That isn’t overly controversial in some ways, because those targets are pretty clear-cut.

Where people become more uncomfortable is, for example, when the US carries out counter-terrorism strikes. You have a person on the ground who doesn’t have a lot of identifiable features, so it’s a much more uncertain situation for the AI, one where you really want to make sure you’re triple-checking. That person may be a combatant or may be a civilian. This is not a naval ship or a surface-to-air missile, where it is hard to get it wrong.
