
The Google Threat Intelligence Group's report details how hackers are leveraging Gemini, a large language model (LLM), to streamline and improve their operations. This isn't about entirely new attack methods; it's about making existing ones faster and more efficient. The report highlights that hackers are using Gemini for tasks ranging from target profiling and open-source intelligence (OSINT) to generating phishing lures and translating text [1].
The applications are diverse. A China-based actor used Gemini for debugging, research, and technical guidance related to intrusions. Another instance involved a Chinese-linked group creating an expert cybersecurity persona to automate vulnerability analysis and develop targeted test plans [1].
Beyond direct usage, Google identified "model extraction" attempts. This involves attackers with authorized API access sending a barrage of prompts—in one case, over 100,000—to replicate Gemini's behavior and reasoning [1]. The goal is to distill (recreate) the model's functionality in order to train a separate, potentially malicious AI, which poses a commercial and intellectual-property risk.
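Conceptually, this works like knowledge distillation: the attacker queries the deployed model at scale and uses the harvested prompt/response pairs as training data for a "student" model. The sketch below is a toy illustration only—`query_teacher` is a stand-in for a remote LLM endpoint, and the memorizing student is a deliberate simplification, not how a real extraction attack or Gemini's API works:

```python
# Toy illustration of model extraction ("distillation") via bulk queries.
# query_teacher stands in for a deployed model endpoint; the "student"
# simply memorizes the harvested prompt/response pairs.

def query_teacher(prompt: str) -> str:
    # Stand-in for the remote model: classify prompts by keyword.
    if "sql" in prompt.lower():
        return "category: injection"
    if "phish" in prompt.lower():
        return "category: social-engineering"
    return "category: other"

def extract_dataset(prompts):
    # Attacker harvests (prompt, response) pairs at scale.
    return [(p, query_teacher(p)) for p in prompts]

class StudentModel:
    # Trivial "student": a lookup table built from the harvested data.
    def __init__(self, dataset):
        self.table = dict(dataset)

    def predict(self, prompt: str) -> str:
        return self.table.get(prompt, "category: other")

prompts = ["test SQL injection payloads", "draft a phishing lure", "weather today"]
student = StudentModel(extract_dataset(prompts))
# On harvested prompts, the student now mimics the teacher.
assert all(student.predict(p) == query_teacher(p) for p in prompts)
```

In a real attack the student would be a neural network fine-tuned on the pairs, but the economics are the same: each of those 100,000+ prompts buys the attacker another labeled training example.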
Google states it has taken action by disabling abusive accounts and implementing targeted defenses within Gemini's classifiers. It also continues to test and rely on safety guardrails. However, the report suggests this is an ongoing battle, with attackers constantly seeking new ways to exploit the technology.
"The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets," Google said in the report [1].
The Google Threat Intelligence Group (GTIG) identified state-backed groups from China, Iran, North Korea, and Russia using Gemini [1]. These groups are employing Gemini for reconnaissance, phishing, and even post-compromise activities. One North Korean group, UNC2970, used Gemini to synthesize open-source intelligence (OSINT) and profile high-value targets [2].
Hackers also abuse the public sharing features of AI platforms like Gemini and OpenAI's ChatGPT to host deceptive social engineering content. With techniques like 'ClickFix', the instructions are hosted on trusted AI domains to bypass security filters, tricking users into manually executing malicious commands.
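Defensively, ClickFix-style lures can often be flagged with simple heuristics, because they must instruct the victim to open a run dialog or terminal and paste a command. The filter below is an illustrative sketch, not a vetted ruleset—the specific patterns are assumptions about common lure phrasing:

```python
import re

# Heuristic patterns typical of ClickFix-style lures: instructions to open a
# run dialog or terminal and paste/execute a command. Illustrative only.
CLICKFIX_PATTERNS = [
    re.compile(r"win\s*\+\s*r", re.IGNORECASE),                 # "press Win+R"
    re.compile(r"paste.{0,40}(command|terminal)",               # "paste this command"
               re.IGNORECASE | re.DOTALL),
    re.compile(r"powershell\s+-enc", re.IGNORECASE),            # encoded PowerShell
    re.compile(r"curl\s+[^\n|]+\|\s*(ba)?sh", re.IGNORECASE),   # pipe-to-shell
]

def looks_like_clickfix(text: str) -> bool:
    """Return True if the text matches any ClickFix-style heuristic."""
    return any(p.search(text) for p in CLICKFIX_PATTERNS)

lure = "To verify you are human, press Win+R and paste this command."
assert looks_like_clickfix(lure)
assert not looks_like_clickfix("Here is a summary of today's weather.")
```

Real detection pipelines layer signals like these with URL reputation and behavioral telemetry; the point of hosting the lure on a trusted AI domain is precisely to defeat reputation-only checks.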
Expect to see continued cat-and-mouse games between AI developers and malicious actors. As AI models become more powerful, so too does the potential for abuse. Enhanced detection and prevention methods will be critical, as will industry-wide collaboration to share threat intelligence and best practices.
State-backed hackers are leveraging Gemini AI to accelerate and improve existing cyberattack methods, rather than creating entirely new ones. Gemini is being used for tasks like target profiling, open-source intelligence gathering, generating phishing lures, translating text, and even debugging intrusion attempts, making attacks faster and more efficient.
"Model extraction" occurs when attackers attempt to replicate Gemini's capabilities by sending a large number of prompts to the AI, sometimes over 100,000, to understand its reasoning. The goal is to recreate the model's functionality and train a separate, potentially malicious AI, which poses a commercial and intellectual-property risk.
State-backed groups from China, Iran, North Korea, and Russia have been identified using Gemini AI for cyberattacks. These groups are employing Gemini for various stages of attacks, including reconnaissance, phishing, and post-compromise activities, to profile targets and synthesize open-source intelligence.
Google has disabled abusive accounts and implemented targeted defenses within Gemini's classifiers to counter the abuse. It also continues to test and rely on safety guardrails to protect against new exploitation methods, though the report suggests attackers are constantly seeking new ways to exploit the technology.
The primary risk is the increased tempo of attacks; faster attack cycles give defenders less time to react. By using Gemini AI, hackers can streamline and improve their operations, making existing attack methods faster and more efficient, which reduces the window of opportunity for security teams to respond effectively.