
Hackers are using Gemini to target you, Google says


AI Overview

  • State-sponsored hackers are using Gemini AI across various stages of cyberattacks, from reconnaissance and phishing to post-compromise activity.
  • Attackers are attempting "model extraction" to replicate Gemini's capabilities, potentially leading to separate, malicious AI models.
  • Google has disabled accounts and implemented defenses within Gemini to counter the abuse, but the battle is ongoing.
  • The primary risk is increased tempo: faster attack cycles give defenders less time to react.

Google's Gemini AI isn't just generating marketing copy; it's now a tool in the hands of state-backed hackers. According to a new report from Google's Threat Intelligence Group, malicious actors are using Gemini to accelerate and enhance their cyberattacks, impacting everything from initial reconnaissance to post-compromise activities. This marks a shift in the threat landscape, where AI is actively weaponized to shorten the timeline for attacks and potentially bypass traditional security measures.

Gemini as a Hacker's Assistant

The Google Threat Intelligence Group's report details how hackers are leveraging Gemini, a large language model (LLM), to streamline and improve their operations. This isn't about entirely new attack methods; it's about making existing ones faster and more efficient. The report highlights that hackers are using Gemini for tasks ranging from target profiling and open-source intelligence (OSINT) to generating phishing lures and translating text [1].

Expanding Attack Vectors

The applications are diverse. A China-based actor used Gemini for debugging, research, and technical guidance related to intrusions. Another instance involved a Chinese-linked group creating an expert cybersecurity persona to automate vulnerability analysis and develop targeted test plans [1].

Model Extraction: Cloning Gemini's Brain

Beyond direct usage, Google identified "model extraction" attempts. This involves attackers with authorized API access sending a barrage of prompts—in one case, over 100,000—to replicate Gemini's behavior and reasoning [1]. The goal is to distill, or recreate, the model's functionality in order to train a separate, potentially malicious AI. This poses a risk to Google's intellectual property and commercial interests.
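To make the technique concrete, here is a minimal conceptual sketch of extraction-style querying. The `teacher_model` function below is a trivial stand-in for a proprietary model's API, not a real LLM; the point is the pattern of harvesting input/output pairs at scale to train a "student" copy.

```python
# Illustrative sketch of "model extraction" (distillation from an API).
# teacher_model is a hypothetical placeholder, not a real model endpoint.

def teacher_model(prompt: str) -> str:
    """Stand-in for a proprietary model's API."""
    return prompt.upper()  # trivial placeholder behavior

def collect_distillation_data(prompts):
    """An extractor sends many prompts and records input/output pairs,
    building a dataset to train a cheaper 'student' replica."""
    return [(p, teacher_model(p)) for p in prompts]

# The reported campaign involved over 100,000 prompts;
# here we simulate only a tiny batch.
dataset = collect_distillation_data(["hello", "explain sql injection"])
print(len(dataset))  # number of (prompt, response) training pairs
```

The sheer prompt volume is what distinguishes extraction from ordinary use, which is why providers monitor for anomalous query patterns on authorized API keys.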

Google's Response and Limitations

Google states it has taken action by disabling abusive accounts and deploying targeted defenses within Gemini's classifiers, and says it continuously tests and strengthens its safety guardrails. However, the report suggests this is an ongoing battle, with attackers constantly seeking new ways to exploit the technology.
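The classifier-style defense the report alludes to can be illustrated with a toy example: a filter that screens prompts before they reach the model. This is purely a sketch — Gemini's real classifiers are far more sophisticated, and the marker list here is hypothetical.

```python
# Toy illustration of a misuse classifier gating prompts before they
# reach a model. The marker list is hypothetical; production systems
# use trained classifiers, not keyword matching.

SUSPICIOUS_MARKERS = ("waf bypass", "sql injection payload", "rce exploit")

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known-abuse pattern."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

def guarded_respond(prompt: str) -> str:
    """Refuse flagged prompts; otherwise pass through to the model."""
    if flag_prompt(prompt):
        return "Request refused by safety classifier."
    return "...model response..."
```

The cat-and-mouse dynamic the report describes follows directly: attackers rephrase prompts (such as the fabricated "expert persona" scenario) to slip past whatever the classifier currently catches.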

"The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets," Google said in the report [1].

Who's Using Gemini for Attacks?

The Google Threat Intelligence Group (GTIG) identified state-backed groups from China, Iran, North Korea, and Russia using Gemini [1]. These groups are employing Gemini for reconnaissance, phishing, and even post-compromise activities. One North Korean group, UNC2970, used Gemini to synthesize open-source intelligence (OSINT) and profile high-value targets [2].

Hiding in Plain Sight

Hackers also abuse the public sharing features of AI platforms like Gemini and OpenAI's ChatGPT to host deceptive social engineering content. One such technique, 'ClickFix', hosts malicious instructions on trusted AI domains — letting them slip past security filters — and tricks users into manually executing the commands themselves.
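A defender-side illustration: ClickFix lures typically instruct the victim to open a run dialog or terminal and paste a command, which gives them a recognizable shape. The patterns below are hypothetical examples for demonstration, not a production detection rule.

```python
# Illustrative heuristic for spotting ClickFix-style lures, which tell a
# victim to open a run dialog or terminal and paste a command.
# These patterns are hypothetical examples, not a real detection rule.
import re

CLICKFIX_PATTERNS = [
    r"press\s+win\s*\+\s*r",          # open the Windows Run dialog
    r"paste\s+(this|the)\s+command",  # instruct the user to paste
    r"powershell\s+-enc",             # encoded PowerShell one-liner
]

def looks_like_clickfix(text: str) -> bool:
    """Flag text that matches common ClickFix lure phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in CLICKFIX_PATTERNS)
```

Because the lure is hosted on a trusted AI domain rather than delivered as an attachment, content-based heuristics like this one are among the few signals available to filters.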

What's Next

Expect to see continued cat-and-mouse games between AI developers and malicious actors. As AI models become more powerful, so too does the potential for abuse. Enhanced detection and prevention methods will be critical, as will industry-wide collaboration to share threat intelligence and best practices.

Why It Matters

  • Tempo: The most significant impact is the accelerated pace of attacks. By automating tasks like vulnerability analysis and phishing lure generation, hackers can significantly reduce the time between initial reconnaissance and actual damage [1].
  • Model Extraction: Attempts to replicate Gemini's capabilities through extensive prompting pose a threat to intellectual property and could lead to the creation of malicious AI models. This "distillation" campaign involved over 100,000 prompts.
  • Evasion: The Honestcue malware leveraged Google Gemini’s API to dynamically generate and execute malicious C# code in memory, showcasing how threat actors exploit AI to evade detection [1].
  • Accessibility: The use of AI democratizes sophisticated attack techniques, potentially enabling less skilled actors to launch more complex campaigns. Nation-state actors are already using it for reconnaissance and social engineering [2].
  • Defensive AI: The rise of AI-powered attacks necessitates the development of equally sophisticated AI-powered defenses. Some companies are already developing AI models for vulnerability scanning, reconnaissance, and automation [1].

FAQ

How are hackers using Gemini?

State-backed hackers are leveraging Gemini AI to accelerate and improve existing cyberattack methods, rather than creating entirely new ones. Gemini is being used for tasks like target profiling, open-source intelligence gathering, generating phishing lures, translating text, and even debugging intrusion attempts, making attacks faster and more efficient.

What is "model extraction"?

"Model extraction" is when attackers attempt to replicate Gemini's capabilities by sending a large number of prompts to the AI, sometimes over 100,000, to understand its reasoning. The goal is to recreate the model's functionality and train a separate, potentially malicious AI, which poses a risk to commercial and intellectual property.

Who is behind these attacks?

State-backed groups from China, Iran, North Korea, and Russia have been identified using Gemini AI for cyberattacks. These groups are employing Gemini for various stages of attacks, including reconnaissance, phishing, and post-compromise activities, to profile targets and synthesize open-source intelligence.

How has Google responded?

Google has disabled abusive accounts and implemented targeted defenses within Gemini's classifiers to counter the abuse by hackers. It is also continuously testing and relying on safety guardrails to protect against new exploitation methods, but the report suggests that attackers are constantly seeking new ways to exploit the technology.

What is the biggest risk?

The primary risk is the increased tempo of attacks; faster attack cycles give defenders less time to react. By using Gemini AI, hackers can streamline and improve their operations, making existing attack methods faster and more efficient, which reduces the window of opportunity for security teams to respond effectively.
