After years of portraying itself as a security- and responsible-AI-first model provider, Anthropic's decision to soften its AI security pledge and loosen its "responsible scaling policy" reflects the economic and political pressures it faces and the flexibility it needs to survive in the AI market.
Anthropic's Chief Science Officer Jared Kaplan told Time magazine this week that the company will no longer hold to its 2023 commitment to never train an AI system unless it is certain that adequate safeguards are in place. According to Kaplan, Anthropic made this change to remain competitive in the turbulent AI market.
Instead, Anthropic will now commit to clearly showing enterprises how its models perform in security tests.
Kaplan's revelation comes as the AI model provider is seeing significant growth in the use of its Claude models amid competition from arch-rival OpenAI.
However, this growth comes amid an ongoing dispute between Anthropic and the US Department of Defense. Defense Department officials have said in recent days that Anthropic could become a "supply chain risk" because the generative AI vendor does not want its technology used for mass surveillance of Americans or in fully autonomous weapons systems. If Anthropic can't resolve its problems with the Defense Department, it could lose its government contracts and access to commercial partners that do business with the Pentagon.
government pressure
Losing some or all of its lucrative government contracts would be a dramatic blow to Anthropic. It would also represent a significant shift in the AI market: from at least one vendor taking a firm stand on safety to all vendors pursuing innovation above all else. Meanwhile, regulation of AI technology in the US is almost nonexistent.
"Governments domestically, but also in large parts of the world, are not really aggressively regulating this technology," said Michael Bennett, associate vice chancellor for data and AI strategy at the University of Illinois Chicago. Many AI vendors, he said, have free rein to innovate as quickly as they want. Anthropic therefore recognizes the risk that, if it fails to innovate, another AI vendor less committed to security will take the lead.
“They’re effectively saying, ‘Hey, we need to stay in this race. The government isn’t really helping. Competitors aren’t taking our suggestions here. So, we shouldn’t be testing ourselves at this rate when others aren’t doing the same,'” Bennett said.
Furthermore, the regulatory landscape in the US has shifted from relatively small but significant steps toward AI regulation at the federal level under former President Joe Biden to President Donald Trump's emphasis on unbridled innovation and opposition to regulation, highlighted by his December executive order that bars states from passing their own AI laws.
Bennett said that due to the changed regulatory environment, a portion of Anthropic’s enterprise customers will be sympathetic to changes in the vendor’s AI security policy.
"Many customers, or at least some of their customers, who are committed to the spirit of the responsible scaling policy will understand that Anthropic is still holding a line when it comes to the more controversial applications (of its technology)," he said. "The change in policy does not mean that Anthropic has become an unethical actor in the space."
The responsible scaling policy (RSP) is Anthropic's framework that sets measurable thresholds for AI model capabilities and mandates security protocols, including halting development if those standards are not met.
Furthermore, some enterprises do not focus as much on security. Rather, they want to use Anthropic's popular Claude Code agent to create software, said Jeff Pollard, an analyst at Forrester Research.
“They want to be able to write more code, write better code, write code faster, and the potential safety and security underpinnings would have been a nice thing, but I don’t think they were necessary for a lot of those customers,” Pollard said.
a business decision
But Anthropic’s decision to reduce the emphasis on AI security is still likely to have some consequences.
"I am a little concerned about Anthropic's move," said Lily Li, an AI risk, data privacy and cybersecurity lawyer and founder of Metaverse Law, a firm that focuses on privacy, AI and cybersecurity law for the digital economy. "I completely understand why the company is doing what it is doing in the face of these dual threats, but I am concerned because many people, including myself, support AI models that have strong safeguards in place."
Li added, "The more you weaken these public representations of safety, it can really hurt the bottom line in the long run."
But Anthropic will likely remain a top AI vendor that prioritizes safety and security, Pollard said.
“If we believe that is a core part of Anthropic’s value… if we want companies to have that aspiration in the market, we need Anthropic to survive,” he said.
Bennett said that the loosening of its security policy could also accelerate the arrival of Anthropic's next powerful model.
"If they have the kind of momentum that the previous releases show," he said, referring to significant updates to the Claude model family over the past year, "then you may see a more powerful next Claude version ... which may encourage some of its competitors to do whatever they can to accelerate their own development."
Moreover, AI regulation is not completely absent, Li said. Some states, such as Colorado, are shifting the regulatory focus from AI tool developers to those who deploy the tools.
The Colorado Artificial Intelligence Act took effect February 1 and regulates the deployment of AI tools to prevent discrimination in housing, employment, health care and finance.
