Chinese and Iranian hackers have been leveraging ChatGPT and other LLM tools to carry out a series of cyberattacks, according to a recent report from OpenAI. The report reveals that more than twenty cyber operations have been orchestrated using generative AI, specifically ChatGPT.

Insights from the Report

The malicious activities include spear-phishing attacks, malware development, and other nefarious actions facilitated by generative AI. Notably, the report highlights two significant cyberattacks that employed ChatGPT.

Chinese Threat Actors Attack Asian Governments

In November 2023, Cisco Talos reported an attack orchestrated by Chinese threat actors targeting Asian governments. The attack, attributed to a group tracked as 'SweetSpecter', relied on a spear-phishing technique built around a malicious ZIP file. Once downloaded and executed, the file triggered an infection chain on the victim's system. OpenAI's investigation found that SweetSpecter used ChatGPT across multiple accounts to develop scripts and research vulnerabilities with LLM tools.

'CyberAv3ngers' and 'Storm-0817' Cyberattacks

An Iran-based group named 'CyberAv3ngers' used ChatGPT to research vulnerabilities and steal user passwords from macOS-based computers. Another Iranian group, known as 'Storm-0817', employed ChatGPT to develop Android malware designed to extract contact lists, call logs, browser history, location data, and files from infected devices.

Implications and Future Considerations

While these attacks relied on existing methodologies and did not introduce entirely new forms of malware, they underscore how easily threat actors can turn generative AI to malicious ends. Such incidents raise concerns about the broader exploitation of AI services by malicious entities.

OpenAI has committed to strengthening its safeguards against such misuse, working closely with its internal security teams. The company also plans to share its findings with industry peers and the broader research community to help prevent similar incidents.

Securing Generative AI Platforms

Given the vulnerabilities exposed by these cyberattacks, it is imperative for the major players behind generative AI platforms to prioritize security measures. Proactive safeguards are crucial to thwart potential threats before they materialize, emphasizing the importance of preventive strategies over reactive responses.