Unveiling the Dark Side of ChatGPT: The OpenAI Report

Published on October 14, 2024

Hackers Misusing ChatGPT To Write Malware: OpenAI Report

OpenAI, the company behind the popular AI chatbot ChatGPT, has revealed a concerning trend. Over the course of 2024, it has uncovered and disrupted more than 20 operations conducted by deceptive networks worldwide, including activity linked to state-sponsored hackers from Iran and China.


AI-Powered Malicious Activities

In a report published on Wednesday, OpenAI disclosed that these operations involved the use of its AI-powered chatbot, ChatGPT, for a range of malicious activities, from debugging malware to generating website content and spreading disinformation on social media.

The report, authored by researchers Ben Nimmo and Michael Flossman, highlights how threat actors used AI at an intermediate stage of their operations: AI models were employed in the phase between acquiring basic resources, such as internet access and social media accounts, and deploying finished products, such as malware or social media posts.


Key Threat Groups

The OpenAI report sheds light on three significant threat groups that have exploited ChatGPT for cyberattacks:

  • SweetSpecter: This China-linked group used ChatGPT for reconnaissance, vulnerability research, and scripting support.
  • CyberAv3ngers: Linked to Iran’s Islamic Revolutionary Guard Corps, this group targeted industrial control systems (ICS), using ChatGPT for research and coding support.
  • STORM-0817: This Iran-based threat actor used ChatGPT to create malware tools and develop malicious code, one of several attempts OpenAI observed to exploit the chatbot for offensive purposes.

AI's Role in Cyberattacks

Although these threat groups attempted to leverage ChatGPT for their malicious schemes, OpenAI clarified that AI did not give them significant new capabilities for developing malware. The hackers gained only incremental advantages that were already attainable with existing non-AI tools.


OpenAI's Response

In response to this misuse, OpenAI has taken steps to identify and disrupt such activities. The company stated its commitment to collaborating with internal and external teams to anticipate and prevent the misuse of advanced models for harmful purposes.

As the landscape of cyber threats evolves, OpenAI remains vigilant in ensuring the safety and security of AI technologies and their applications.
