Google Exposes Cybercriminal Efforts to Manipulate AI Chatbot
As artificial intelligence continues to advance, cybercriminals are seeking ways to exploit it for their own gain. A recent report from Google details attempts by state-sponsored hacking groups and other threat actors to manipulate its AI chatbot, Gemini, though those attempts were unsuccessful.
Manipulation Techniques
Google's analysis found that hackers tried to bypass Gemini's safeguards using basic prompt-manipulation techniques, such as rephrasing commands or submitting them repeatedly. Although these tactics failed, the report underscores growing interest in AI-driven cyberattacks.
Exploration by Government-Backed Groups
Beyond simple jailbreak attempts, government-backed hacking groups have explored using Gemini for activities such as intelligence gathering, vulnerability research, and automated scripting.
Iranian cyber actors have reportedly used AI to craft phishing campaigns and monitor defense experts, while Chinese operatives have turned to it for coding assistance and for probing deeper system access. North Korean hackers have folded AI into their attack strategies, researching topics such as cryptocurrency and military tactics.
Security Measures
Despite these infiltration attempts, Google reports that Gemini's security filters successfully blocked any misuse. Concerns are nonetheless mounting: North Korean hackers alone stole $1.3 billion in digital assets in 2024, and AI's role in cyber threats is expected to grow, raising apprehensions about future vulnerabilities.
