
Anthropic, the AI company behind Claude, alleges that DeepSeek, MiniMax, and Moonshot, three Chinese AI firms, attempted to gain an unfair advantage by misusing its AI model. The company claims these firms created more than 24,000 fraudulent accounts to siphon data from Claude, raising concerns about intellectual property and the potential for misuse of AI technology. The incident highlights the growing tension surrounding AI development and the need for stronger regulations and safeguards.
According to Anthropic, these companies generated more than 16 million interactions with Claude through the fraudulent accounts, harvesting its outputs to train their own models, a practice known as distillation [1]. The activity targeted Claude's advanced capabilities, including agentic reasoning, tool use, and coding, and violated Anthropic's terms of service and regional access restrictions [3]. Extraction at this scale raises concerns about the security and integrity of frontier AI models.
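For readers unfamiliar with the term, distillation means training a "student" model to reproduce the outputs of a stronger "teacher" model. The sketch below shows the core training signal in its classic textbook form; the tiny networks and random data are placeholders, and this is a generic illustration of the technique, not a description of what the accused firms actually did.

```python
import torch
import torch.nn.functional as F

# Generic knowledge-distillation sketch. In the API-scale variant Anthropic
# describes, the "teacher" outputs would be responses harvested from a hosted
# model; here both networks are tiny stand-ins so the example runs anywhere.
teacher = torch.nn.Linear(16, 8)   # stands in for the stronger, frozen model
student = torch.nn.Linear(16, 8)   # the model being trained to imitate it
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                            # temperature: softens both distributions

for step in range(100):
    x = torch.randn(32, 16)        # placeholder input batch
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence pushes the student toward the teacher's output behavior.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At API scale, the same idea amounts to collecting millions of prompt-response pairs from a hosted model and fine-tuning on them, which is why the 16 million interactions figure is central to Anthropic's complaint.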
MiniMax's extraction effort targeted agentic coding, tool use, and orchestration [3]. Anthropic detected the campaign while it was still active, before MiniMax had released the model it was training [3], which suggests the company actively monitors its platform for this kind of suspicious activity.
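Anthropic has not disclosed how the campaign was detected. Purely as a hypothetical illustration of the kinds of signals a provider might monitor, the sketch below flags accounts by raw request volume and by clusters of accounts sharing signup infrastructure; every field name and threshold here is invented for the example.

```python
from collections import defaultdict

def flag_suspicious_accounts(request_log, volume_threshold=10_000,
                             cluster_threshold=50):
    """Hypothetical abuse heuristic, not Anthropic's actual detection system.

    request_log is assumed to be an iterable of dicts like
    {"account": "a1", "ip_block": "203.0.113.0/24"}.
    """
    volume = defaultdict(int)       # requests seen per account
    by_ip_block = defaultdict(set)  # accounts seen per signup IP block
    for event in request_log:
        volume[event["account"]] += 1
        by_ip_block[event["ip_block"]].add(event["account"])

    # Flag accounts with implausibly high volume for a single user.
    flagged = {a for a, n in volume.items() if n > volume_threshold}
    for accounts in by_ip_block.values():
        # Many accounts behind one IP block suggests coordinated signup.
        if len(accounts) > cluster_threshold:
            flagged |= accounts
    return flagged
```

Real abuse detection would weigh far more signals, but volume and clustering alone already make a 24,000-account, 16-million-request campaign hard to hide.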
Anthropic also argues that this incident reinforces the need for export controls on advanced chips [1]. They believe that restricting access to these chips would limit both direct model training and the scale of illicit distillation attempts. The company suggests that the scale of extraction performed by DeepSeek, MiniMax, and Moonshot “requires access to advanced chips” [1].
Anthropic has begun rolling out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches [4]. Claude Code Security is designed to counter AI-enabled attacks by giving defenders an advantage and raising the overall security baseline [4]. The feature is a direct response to the potential for AI-enabled harm, whether the misuse behind it is deliberate or inadvertent.
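Anthropic has not published the internals of Claude Code Security, but conceptually an AI-assisted scan pairs codebase analysis with model-suggested fixes for human review. The outline below is a sketch under that assumption: suggest_patch is a placeholder for a model call, not the actual Claude Code API, and the pattern list is deliberately naive.

```python
import pathlib

# Naive string patterns standing in for real static analysis.
RISKY_PATTERNS = ["eval(", "pickle.loads(", "shell=True"]

def suggest_patch(path, line_no, line):
    # Placeholder: a real tool would send the surrounding code to a model
    # and return a proposed fix for a human to review before applying.
    return f"review {path}:{line_no}: {line.strip()}"

def scan_codebase(root="."):
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for line_no, line in enumerate(text.splitlines(), 1):
            if any(p in line for p in RISKY_PATTERNS):
                findings.append(suggest_patch(path, line_no, line))
    return findings

if __name__ == "__main__":
    for finding in scan_codebase():
        print(finding)
```

A production scanner reasons about data flow rather than matching strings, but the scan, suggest, and review loop is the general shape of the feature Anthropic describes [4].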
Anthropic emphasizes the potential for misuse of AI technology, particularly by authoritarian governments, pointing to the risk of frontier AI being deployed for offensive cyber operations, disinformation campaigns, and mass surveillance. The company also notes that these risks are multiplied if such models are open-sourced, underscoring the importance of responsible AI development and deployment.
Anthropic is calling for coordinated action from the AI industry, cloud providers, and policymakers to address data extraction and misuse. The company says it is investing in technologies that make distillation attacks harder to execute and easier to identify, alongside defensive tools such as the Claude Code Security feature described above.
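Anthropic does not say which technologies it is investing in. One published family of approaches that makes extracted output easier to identify is statistical watermarking (Kirchenbauer et al., 2023): at each generation step the vocabulary is split into a "green" and "red" list seeded by the previous token, and green-listed tokens receive a small logit boost. The sketch below illustrates that idea only; it is not a description of Anthropic's methods, and the vocabulary size and constants are arbitrary.

```python
import hashlib
import numpy as np

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5   # share of the vocabulary on each step's green list
BOOST = 2.0            # logit bonus applied to green-listed tokens

def green_list(prev_token):
    # Deterministically derive this step's green list from the previous token.
    seed = int.from_bytes(
        hashlib.sha256(str(prev_token).encode()).digest()[:4], "big")
    return np.random.default_rng(seed).random(VOCAB_SIZE) < GREEN_FRACTION

def watermarked_sample(logits, prev_token, rng):
    # Bias sampling toward green tokens, then sample from the softmax.
    boosted = logits + BOOST * green_list(prev_token)
    probs = np.exp(boosted - boosted.max())
    return int(rng.choice(VOCAB_SIZE, p=probs / probs.sum()))

def green_share(tokens):
    # Detection: watermarked text lands on green tokens far more often than
    # the ~50% chance rate, a statistically testable signature.
    hits = sum(green_list(prev)[tok] for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Whether such watermarks survive a full distillation pipeline remains an open research question, but techniques in this family match the "easier to identify" goal Anthropic describes.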