Unveiling the Misuse of Generative AI by Threat Actors

Published On Sat Feb 22 2025

How Threat Actors Use Gemini for Attacks

The Google Threat Intelligence Group (GTIG) has released a new report, “Adversarial Misuse of Generative AI,” in which security experts explain how threat actors are currently using generative AI tools such as Gemini to support their attacks. Threat actors are experimenting with Gemini and gaining productivity from it, but they are not yet developing novel capabilities. For now, they use AI primarily for research, debugging code, and creating and localizing content.

Advanced persistent threat (APT) actors used Gemini to support several stages of the attack lifecycle: researching potential infrastructure and free hosting providers, performing reconnaissance on target organizations, researching vulnerabilities, developing payloads, and getting help with malicious scripting and evasion techniques. Iranian APT actors were the heaviest users of Gemini across these purposes, while Russian APT actors used it only to a limited extent during the analysis period.


Information operations (IO) actors used Gemini for research and content creation, developing personas and messages, translating and localizing content, and finding ways to increase their reach. Iranian IO actors were the heaviest users of Gemini, while Chinese and Russian IO actors mainly used it for general research and content creation.

Security measures prevented the creation of content that would significantly enhance threat actors' capabilities. Gemini assisted with common tasks such as content creation, summarization, explaining complex concepts, and simple coding; requests for more elaborate or explicitly malicious tasks triggered Gemini's safety responses.
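
For readers unfamiliar with what such a safety response looks like in practice, the short sketch below shows where a refusal surfaces when calling Gemini through Google's google-generativeai Python SDK. The model name, prompt, and API-key handling are illustrative assumptions and are not taken from the GTIG report.

```python
# Minimal sketch: inspecting how Gemini's safety filters surface in the
# google-generativeai Python SDK. Model name and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied by the caller
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content("Summarize how HTTPS protects data in transit.")

if response.prompt_feedback.block_reason:
    # The prompt itself was refused before any text was generated.
    print("Prompt blocked:", response.prompt_feedback.block_reason)
else:
    candidate = response.candidates[0]
    # A finish reason of SAFETY means generation was stopped by the safety filters.
    print("Finish reason:", candidate.finish_reason)
    for rating in candidate.safety_ratings:
        print(rating.category, rating.probability)
    try:
        print(response.text)
    except ValueError:
        # .text raises when the response contains no usable text (e.g. a safety stop).
        print("No text returned.")
```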

Threat actors also attempted to use Gemini to abuse Google products, for example by researching Gmail phishing techniques, data theft, programming a Chrome infostealer, and bypassing Google account verification. These attempts were unsuccessful, and the actors did not manage to mount sustained attacks or machine learning-focused threats.


Instead of developing custom prompts, threat actors fell back on simpler measures or publicly available jailbreak prompts, none of which bypassed Gemini's security controls.
