
Anthropic Accuses DeepSeek of Using Claude to Train Its AI


AI Overview

  • Anthropic accuses DeepSeek, MiniMax, and Moonshot of using Claude to train their own AI models.
  • The Chinese firms allegedly created more than 24,000 fake accounts and engaged in over 16 million interactions with Claude.
  • Anthropic argues this activity reinforces the need for export controls on advanced chips.
  • The company is calling for coordinated action from the AI industry, cloud providers, and policymakers.

Anthropic, the AI company behind Claude, is alleging that three Chinese AI firms attempted to gain an unfair advantage by misusing its AI model. The company claims these firms created over 24,000 fraudulent accounts to siphon data from Claude, raising concerns about intellectual property and the potential for misuse of AI technology. This incident highlights the growing tension surrounding AI development and the need for stricter regulations and safeguards.

Illicit Data Extraction

Anthropic has accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of improperly using its Claude AI model to train their own systems [1]. The accusation centers on the practice of "distillation," in which a smaller AI model is trained on the output of a larger, more advanced model. While distillation can be a legitimate training method, Anthropic claims it was used illicitly in this case.
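The mechanics of distillation are straightforward: query a strong "teacher" model at scale, collect its responses, and fine-tune a "student" model to imitate them. The sketch below illustrates only the data-collection step; both models are stand-in Python functions, not real APIs.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for a call to a large model's API (e.g. a chat completion).
    return f"High-quality answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Query the teacher and collect (prompt, response) training pairs."""
    return [(p, teacher_model(p)) for p in prompts]

prompts = ["Explain recursion", "Write a sort function", "Plan a multi-step task"]
dataset = build_distillation_dataset(prompts)

# A student model would then be fine-tuned on these pairs so its outputs
# imitate the teacher's. At the scale alleged here (millions of
# interactions), this pattern is what Anthropic describes as an attack.
for prompt, response in dataset:
    print(f"{prompt!r} -> {response!r}")
```

The point of the sketch is that nothing about the pattern is exotic; what distinguishes legitimate use from the alleged abuse is scale, fraudulent account creation, and violation of the provider's terms of service.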

According to Anthropic, these companies generated over 16 million interactions with Claude through more than 24,000 fraudulent accounts [1]. This activity violated Anthropic's terms of service and regional access restrictions [3]. The scale of the alleged data extraction raises concerns about the security and integrity of AI models.

Targeting Advanced Capabilities

Anthropic claims that DeepSeek, MiniMax, and Moonshot specifically targeted the advanced capabilities of Claude, such as agentic reasoning, tool use, and coding [1]. By focusing on these areas, the companies aimed to rapidly improve their own AI models. DeepSeek allegedly targeted Claude's reasoning capabilities while generating "censorship-safe alternatives to politically sensitive questions" [3].

MiniMax targeted agentic coding, tool use, and orchestration [3]. Anthropic detected the campaign while it was still active — before MiniMax released the model it was training [3]. This suggests a proactive approach from Anthropic in monitoring and identifying suspicious activity on its platform.

The Call for Action and Chip Export Controls

Anthropic is calling for a coordinated response from the AI industry, cloud providers, and policymakers to address these issues [1]. The company emphasizes the need for stronger defenses against data extraction and misuse. This includes investing in technologies that make distillation attacks harder to execute and easier to identify.

Anthropic also argues that this incident reinforces the need for export controls on advanced chips [1]. They believe that restricting access to these chips would limit both direct model training and the scale of illicit distillation attempts. The company suggests that the scale of extraction performed by DeepSeek, MiniMax, and Moonshot "requires access to advanced chips" [1].

Potential Security Risks

The incident raises concerns about the potential for misuse of AI technology, particularly by authoritarian governments. Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that is multiplied if those models are open-sourced [1]. This highlights the importance of responsible AI development and deployment.

Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches [4]. Claude Code Security is designed to counter AI-enabled attacks by giving defenders an advantage and improving the security baseline [4]. The feature is a direct response to the potential misuse of AI by actors seeking to cause harm.
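To make the idea of codebase vulnerability scanning concrete, here is a toy illustration of the general technique. The patterns and function names are hypothetical; Claude Code Security performs model-based analysis, which is far more capable than the simple pattern matching sketched here.

```python
import re

# Hypothetical risk patterns, for illustration only.
RISK_PATTERNS = {
    "hardcoded credential": re.compile(r"(?i)(api_key|password)\s*=\s*['\"]"),
    "eval on dynamic input": re.compile(r"\beval\("),
}

def scan_source(source: str):
    """Return (line_number, finding) pairs for lines matching a risk pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan_source(sample))
```

A model-based scanner can go beyond this by reasoning about data flow and context (e.g. whether the input to `eval` is actually attacker-controlled), which is what makes it useful against AI-enabled attacks.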

FAQ

What exactly is Anthropic alleging?

Anthropic alleges that DeepSeek, MiniMax, and Moonshot created over 24,000 fake accounts to extract data from its Claude AI model to train their own AI systems. These firms reportedly engaged in over 16 million interactions with Claude, violating Anthropic's terms of service and regional access restrictions. Anthropic claims this illicit activity targeted Claude's advanced capabilities like agentic reasoning, tool use, and coding.

Why is Anthropic calling for chip export controls?

Anthropic believes that restricting access to advanced chips would limit both direct AI model training and the scale of illicit data extraction attempts like the one they allege was carried out by Chinese firms. They argue that the scale of data extraction performed by DeepSeek, MiniMax, and Moonshot requires access to advanced chips. Export controls would make it harder for malicious actors to train AI models using stolen data.

What security risks does Anthropic highlight?

Anthropic emphasizes the potential for misuse of AI technology, particularly by authoritarian governments. They point to the risk of frontier AI being deployed for offensive cyber operations, disinformation campaigns, and mass surveillance. Anthropic also notes that these risks are multiplied if the models are open-sourced, highlighting the importance of responsible AI development and deployment.

How is Anthropic responding?

Anthropic is calling for coordinated action from the AI industry, cloud providers, and policymakers to address data extraction and misuse. They are investing in technologies that make distillation attacks harder to execute and easier to identify. Anthropic has also begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities.
