Anthropic got angry at DeepSeek for copying its AI without permission, which is very ironic when you consider how it created Claude in the first place


Chance Yeh/Getty Images for HubSpot

Earlier this month, Google publicly complained that “commercially motivated” actors were trying to clone its Gemini AI using agents that interrogated the chatbot up to 100,000 times to “extract” the underlying model.

The hypocrisy of Google’s allegations was obvious. For years, the search giant has relied on indiscriminately scouring the internet for content to train its AI models, without compensating copyright holders — and has racked up lawsuits as a result.

Now Anthropic has entered the fray. Unlike Google, the company behind the Claude chatbot was willing to point the finger directly at Chinese AI firms DeepSeek, Moonshot, and MiniMax, accusing them of “distilling” its AI models. In a new blog post, the company claimed the accused firms created more than 24,000 fake accounts that queried Claude 16 million times, which it called “a violation of our terms of service and regional access restrictions.”

Distillation occurs when a smaller “student” model is trained to replicate the outputs of a much larger “teacher” model — a technical term for what essentially amounts to copying someone’s homework without explicit permission.
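To make the idea concrete, here is a minimal, self-contained sketch of the core of knowledge distillation: the student is penalized for diverging from the teacher's temperature-softened output distribution. The function names and toy logits are illustrative, not anything from Anthropic's or DeepSeek's actual systems.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperatures yield softer
    # distributions, which expose more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's soft targets and the student's
    # distribution. Minimizing this trains the student to mimic the teacher.
    soft_targets = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(soft_targets, student_probs))

# A student whose logits match the teacher's achieves the minimum possible
# loss; a student with a different distribution scores strictly higher.
teacher = [3.0, 1.0, 0.2]
matched = distillation_loss(teacher, [3.0, 1.0, 0.2])
mismatched = distillation_loss(teacher, [0.2, 1.0, 3.0])
print(matched < mismatched)  # True
```

In practice the teacher's soft targets come from API responses at scale, which is why Anthropic frames mass querying of its chatbot as distillation rather than ordinary use.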

Even ChatGPT creator OpenAI accused DeepSeek of distilling its AI models in a statement earlier this month, highlighting the widespread opposition to Chinese entities reverse-engineering extremely expensive AI software — despite those same companies having scraped intellectual property and fed it to their models with little regard for securing the necessary permissions.

In its blog post, Anthropic alleged dishonesty and accused the Chinese companies of “illegally” extracting “Claude’s capabilities to improve their own models.”

“Distillation is a widely used and validated training method,” the blog post reads. “For example, frontier AI laboratories routinely distill their own models to create smaller, cheaper versions for their customers.”

“But distillation can also be used for illicit purposes: competitors can use it to obtain powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take them to develop them independently,” the post continues.

Some of the queries the company identified involved asking Claude to reveal and articulate the internal logic behind a complete response, writing it out step by step — effectively generating chain-of-thought training data at scale.

The allegations come as DeepSeek prepares to release its V4 model, a development that could once again unsettle the US AI industry. According to Anthropic, however, perhaps the biggest culprit was Shanghai-based MiniMax, which launched an IPO on the Hong Kong Stock Exchange last month. Anthropic says it traced “over 13 million exchanges” to the company.

The company claimed, “When we released a new model during MiniMax’s active campaign, they redirected almost half of their traffic within 24 hours to capture capabilities from our latest system.”

As a result, Anthropic is calling for action “from the AI industry, cloud providers, and policymakers.”

“These campaigns are growing in intensity and sophistication,” Anthropic wrote. “The window for action is narrow, and the threat extends beyond any one company or sector.”

But given the careless attitude of some of the biggest players in the AI field, it remains to be seen whether policymakers will spring into action.

In early 2025, DeepSeek took Silicon Valley by storm after proving that its AI model could be built far more cheaply and efficiently than the leading models at the time, triggering a panicked sell-off that erased over $1 trillion in market value.

Whether anything will be different this time remains to be seen. The stakes have risen significantly over the past year, with investors committing hundreds of billions of dollars to the data centers being built by AI companies.

If DeepSeek can prove once again that AI models can be trained cheaply, perhaps the onus will be on US companies to take inspiration from its approach instead.

Meanwhile, netizens had little sympathy for Anthropic.

“They robbed robbers,” one Reddit user wrote. “The poor billionaire.”

Another user argued, “It’s like when zoos accuse you of ‘stealing’ animals they rightfully kidnapped from the wild.”

More on AI: Google says people are copying its AI without permission, just like it scraped everyone’s data without asking to create its own AI first.
