
Why Anthropic is accusing Chinese AI companies of stealing its data
Anthropic accuses Chinese AI firms of large-scale model distillation from Claude, raising national security concerns and intensifying the US-China AI race
The race to control the artificial intelligence space has heated up, with US company Anthropic accusing three Chinese AI companies, in a blog post on Monday (February 23), of stealing data from its chatbot Claude to improve their own models.
What's Anthropic saying?
Anthropic, which recently launched a Bengaluru office, has accused Chinese AI firms DeepSeek, Moonshot AI and MiniMax of carrying out what it calls “industrial-scale distillation attacks” on its chatbot Claude.
According to Anthropic, these companies created around 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude. The goal, it claims, was to extract high-quality responses in areas such as reasoning, coding, and tool use to improve their own AI models.
The Chinese companies did not immediately respond to the allegations.
What is distillation?
Distillation in AI refers to training a smaller or less capable model using the outputs of a more advanced model. In simple terms, it is like a student learning by studying a teacher’s solved answers.
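The teacher-student idea can be sketched in a few lines of code. This is purely illustrative (it is not Anthropic's, DeepSeek's, or any real lab's pipeline, and the `teacher` function here is an invented stand-in for a large model): a simple "student" model is fitted to a "teacher" model's answers rather than to original labeled data.

```python
# Illustrative sketch of model distillation: a "student" is trained on
# the outputs of a "teacher" model, never on the teacher's own training
# data or internals. All names here are hypothetical.

def teacher(x):
    # Stand-in for a large, capable model's response to a query.
    return 2.0 * x + 1.0

# Step 1: query the teacher many times to collect input/output pairs
# (analogous to generating millions of chatbot exchanges).
inputs = [float(i) for i in range(-5, 6)]
dataset = [(x, teacher(x)) for x in inputs]

# Step 2: fit a simpler student model (y = w*x + b) to the teacher's
# answers using plain gradient descent.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in dataset:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(dataset)
        grad_b += 2 * err / len(dataset)
    w -= lr * grad_w
    b -= lr * grad_b

# The student now imitates the teacher's behaviour closely.
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The point of the toy example is that the student recovers the teacher's behaviour from its answers alone, which is why companies restrict automated large-scale querying of their models in their terms of service.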
Anthropic says distillation is a legitimate technique when done internally or with permission. However, it argues that using it to copy another company’s model without consent violates terms of service and regional access rules.
The company claims the Chinese firms used Claude’s responses to fine-tune their own systems, effectively replicating its capabilities.
Is it a national security issue?
Anthropic warns that models built through illicit distillation may not retain the safety safeguards embedded in the original system. That could mean fewer protections against misuse, including the generation of harmful or policy-sensitive content.
The company argues that if such distilled models are open-sourced, the risks multiply because the technology can spread globally without control. It is also using these allegations to support tighter US export controls on advanced chips, claiming that restricting chip access can limit both large-scale model training and improper distillation efforts.
US-China AI rivalry at play?
The accusations come amid growing tension between American and Chinese AI companies. Earlier this month, OpenAI warned US lawmakers that DeepSeek was attempting to replicate leading American AI systems, including ChatGPT.
At the same time, reports have suggested that DeepSeek may have used advanced Nvidia Blackwell chips despite US export restrictions. The escalating claims highlight how artificial intelligence is increasingly being framed as a matter of national security in Washington.
The controversy also arrives as DeepSeek prepares to release a successor to its R1 model, intensifying speculation that Chinese AI systems could soon rival or surpass leading US models.
Pot calling kettle black?
Online reaction has been mixed but largely critical. Some users pointed out that the broader AI industry relies heavily on large-scale data scraping.
Critics referenced past lawsuits against Anthropic over training data practices, arguing that accusations of copying can appear hypocritical in an industry built on vast datasets.
Tech billionaire Elon Musk also weighed in, accusing Anthropic of having trained its own models on scraped data in the past. Musk’s own AI venture has faced similar scrutiny, but he argued that Anthropic was being inconsistent in its criticism.
Economics at work?
A key factor in the dispute is the difference between open-source and proprietary models. Many Chinese AI systems release their model weights publicly, allowing developers worldwide to build on them. By contrast, American models such as Claude and ChatGPT are closed and commercial.
If Chinese open-source models achieve performance comparable to leading US systems, it could reduce the commercial advantage of American firms. That possibility adds economic stakes to an already-tense technological rivalry.
In short, Anthropic’s allegations are not just about stolen data. They reflect a deepening AI cold war, where innovation, intellectual property, national security, and global influence are increasingly intertwined.

