
Google is accusing others of stealing its AI models through "distillation attacks," a technique that reverse-engineers a model's behavior by bombarding it with queries. The accusation lands with a thud given Google's own history of scraping the web to train its AI, highlighting a growing tension around intellectual property in the AI space.
Google characterized these actions as "intellectual property theft" and a violation of its terms of service. The company stated that the attacks targeted Gemini’s ability to reason across multiple languages.
The irony isn't lost on observers. Google built its own models on data scraped from across the web, and is now crying foul over similar behavior, suggesting a double standard when it comes to using others' data for AI development.
"For many AI technologies where LLMs are offered as services, this approach is no longer required; actors can use legitimate API access to attempt to 'clone' select AI model capabilities," Google's report states.
News of Google's troubles comes as the company brings agentic shopping to AI search, letting US shoppers buy items from Etsy and Wayfair in AI Mode in Search as well as in the Gemini app, underscoring its commitment to integrating AI into the e-commerce experience.
Source: futurism.com
Disclosure: This article is for informational purposes only.
AI distillation attacks, also known as model extraction attacks, involve repeatedly querying an AI model to reverse engineer its underlying logic and functionality. Attackers send a massive number of prompts to the model, then analyze the responses to replicate the AI's reasoning abilities. Google characterizes these attacks as intellectual property theft when used against models like Gemini.
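The mechanism described above can be illustrated with a toy sketch. The teacher here is a hidden threshold classifier standing in for a black-box model behind an API; all names and the query budget are illustrative, not drawn from Google's report, and a real attack would send text prompts to an LLM endpoint and train a student network on the responses.

```python
# Toy sketch of a model-extraction ("distillation") attack: the attacker
# can only call teacher_api, never inspect its internals.

def teacher_api(x: float) -> int:
    """Black-box model the attacker queries. (Here: a hidden
    threshold classifier with an unknown boundary at 0.37.)"""
    return 1 if x > 0.37 else 0

def extract_student(query_fn, n_queries: int = 1000):
    """Probe the API on a grid of inputs, then fit a student that
    mimics the observed responses."""
    xs = [i / n_queries for i in range(n_queries + 1)]
    labels = [query_fn(x) for x in xs]          # the "massive number of prompts"
    # Estimate the decision boundary as the midpoint between the
    # last input labeled 0 and the first labeled 1.
    first_one = labels.index(1)
    boundary = (xs[first_one - 1] + xs[first_one]) / 2
    return lambda x: 1 if x > boundary else 0   # cloned model

student = extract_student(teacher_api)
```

After roughly a thousand queries, the student reproduces the teacher's decisions almost everywhere, despite never seeing the teacher's parameters; this is the sense in which legitimate API access can "clone" a capability.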
Google's accusation of AI copying is controversial because the company itself has a history of scraping vast amounts of data from the internet to train its AI models, often without compensating the original creators. This practice has led to copyright infringement lawsuits and accusations of hypocrisy, as Google now cries foul over similar data usage practices.
AI companies are investing in more robust monitoring and defense mechanisms to detect and mitigate distillation attacks. They are also implementing more stringent terms of service and API usage policies to protect their intellectual property. Stronger legal and technical efforts are expected to protect AI models from unauthorized extraction.
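One common first-line defense of the kind mentioned above is per-client rate limiting, since extraction attacks depend on very high query volumes. Below is a minimal sliding-window limiter; the class name and thresholds are illustrative and not taken from any vendor's actual implementation.

```python
import time
from collections import deque

class QueryRateLimiter:
    """Minimal sliding-window rate limiter: a client exceeding
    max_queries within window_seconds is refused further queries."""

    def __init__(self, max_queries: int, window_seconds: float):
        self.max_queries = max_queries
        self.window = window_seconds
        self.timestamps = {}  # client_id -> deque of query times

    def allow(self, client_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.timestamps.setdefault(client_id, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # over budget: deny (or flag for review)
        q.append(now)
        return True
```

Production defenses typically go further, combining such limits with anomaly detection over query patterns (e.g., systematic probing of a capability) rather than raw volume alone.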
Google's Gemini is a large language model (LLM) that is offered as a service via APIs. Google claims that commercially motivated actors are attempting to clone it using up to 100,000 queries to extract the model’s logic, which Google calls a “distillation attack.”
Protecting intellectual property is central to the monetization strategies of AI companies. As LLMs become more powerful, protecting them from unauthorized duplication grows increasingly difficult, especially when they are offered as services via APIs. This incident underscores the need for stronger intellectual property protections for AI models.