Google reveals Gemini AI use by more than 40 state-sponsored advanced persistent threat actors
Google’s Threat Intelligence Group (GTIG) disclosed that more than 40 state-sponsored advanced persistent threat actors (APTs) from Iran, China, North Korea, Russia, and 16 other nations have been using Google’s Gemini AI tools. These threat actors employed the Gemini large language model (LLM) across various stages of the attack life cycle, according to a recent GTIG blog post.
The researchers highlighted that while the use of Gemini resulted in "productivity gains," it did not lead to the development of "novel capabilities."
Findings Consistent with Previous Reports
This revelation aligns with a report published by Microsoft and OpenAI last year, which indicated that state-sponsored actors from Iran, China, North Korea, and Russia had used ChatGPT in a limited capacity for activities such as scripting, phishing, vulnerability research, target reconnaissance, and post-exploitation tasks.
Usage Patterns by Different Threat Actors
According to Google’s report, Iranian threat actors were the most active users of Gemini for hacking and influence operations, while Russian actors made limited use of the AI tool. North Korean threat actors focused on activities related to the country's IT worker campaign, including reconnaissance on international companies and malware development.
China's Involvement
More than 20 China-backed groups used Gemini to streamline their hacking operations, particularly to gather information on U.S. critical infrastructure, Windows exploits, and methods for lateral movement across compromised systems.
Identified Attack Life Cycle Segments
GTIG identified seven key phases of the attack life cycle in which Gemini was used: victim reconnaissance, tool weaponization, payload delivery, vulnerability exploitation, malware installation, command-and-control communications, and actions to achieve adversarial objectives.
Diverse Reconnaissance Efforts
Reconnaissance targets and objectives varied among APTs from different countries. Iranian threat actors focused on defense and government organizations, while Chinese APTs were interested in U.S. IT providers and military personnel.
Requests for Coding Assistance and Research
Threat actors sought help with malware development, exploitation of tools and vulnerabilities, phishing techniques, and post-compromise activities. Google noted that Gemini's safeguards prevented the model from complying with malicious requests involving Google products such as Gmail and Chrome.
Continued Efforts to Strengthen Security
GTIG emphasized the importance of sharing its findings with the public, industry partners, and law enforcement to combat the misuse of AI tools. Google reiterated its commitment to enhancing the security of its AI models based on the insights gained from adversarial activities discovered on its platform.