The New Risks ChatGPT Poses to Cybersecurity
The FBI's 2021 Internet Crime Report identified phishing as the most common IT threat in America. With OpenAI's ChatGPT, hackers now have a new tool to bolster their phishing campaigns and slip past advanced cybersecurity software. This poses a serious risk to the sector, especially given the sharp increase in data breaches in 2022. In this article, we examine the new risks ChatGPT poses to cybersecurity, explore the actions cybersecurity professionals must take to address them, and call for government oversight to ensure AI usage does not undermine cybersecurity efforts.
New Risks from ChatGPT
While older language-based AI models have been available to the general public for some time, ChatGPT is the most advanced iteration to date. With its ability to converse seamlessly with users without making spelling, grammar, or verb-tense mistakes, ChatGPT can fool users into thinking there is a real person on the other end of the chat window. This is a game-changer for hackers: it gives them near-native fluency in English to bolster their phishing campaigns and slip past advanced cybersecurity defenses.
Action Required
As bad actors continue to use ChatGPT for phishing, cybersecurity professionals must equip their IT teams with tools that can distinguish human-written messages from ChatGPT-generated ones, especially in incoming "cold" emails. Cybersecurity staff should also be routinely trained and retrained on the latest prevention and awareness skills, with a specific focus on AI-supported phishing scams. Alongside these training programs, the wider public and the sector need to keep advocating for advanced detection tools.
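To make the idea of a detection tool concrete, the sketch below shows how an incoming cold email might be screened with an AI-text classifier. It assumes the Hugging Face transformers library and uses the public openai-community/roberta-base-openai-detector model purely as an illustration; any vetted detector could be swapped in, and the 0.8 threshold is an arbitrary example. No detector of this kind is reliable enough to act on alone, so the score should be treated as one triage signal among many.

```python
# Minimal sketch: screen an incoming "cold" email with an AI-text detector.
# The model name and threshold below are illustrative assumptions, not a
# recommendation of any particular product.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def screen_cold_email(body: str, threshold: float = 0.8) -> dict:
    """Flag a message body for human review if it looks machine-generated."""
    # truncation=True keeps long emails within the model's context window.
    result = detector(body, truncation=True)[0]
    # Label names vary by model; this detector reports "Real" vs. "Fake".
    flagged = result["label"].lower() == "fake" and result["score"] >= threshold
    return {"label": result["label"], "score": result["score"], "flagged": flagged}

if __name__ == "__main__":
    sample = (
        "Dear colleague, your mailbox storage is almost full. "
        "Please verify your credentials at the link below to avoid interruption."
    )
    print(screen_cold_email(sample))
```

In practice, a flagged message would be routed to an analyst or quarantined rather than blocked outright, since detectors produce both false positives and false negatives.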
Furthermore, cybersecurity professionals need proper training and resources to respond to AI-generated threats that extend beyond phishing, including AI-assisted code generation and other programming tools in attackers' hands. That training should also cover how defenders can use AI as a tool in their own workflows.
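As one hedged example of folding AI into a defender's workflow, the sketch below asks a language model to summarize phishing indicators in a user-reported email so an analyst can review it faster. It assumes the official openai Python package, an OPENAI_API_KEY environment variable, and the "gpt-4o-mini" model name; all three are illustrative choices, and the model's output is advisory only.

```python
# Hedged sketch: use an LLM to pre-summarize a reported email for an analyst.
# Package, API key handling, and model name are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def triage_reported_email(raw_email: str) -> str:
    """Ask the model for a short, structured read on a suspicious email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a security analyst. List any phishing indicators "
                    "in the email below (urgency, credential requests, mismatched "
                    "links, spoofed senders) and end with LOW, MEDIUM, or HIGH risk."
                ),
            },
            {"role": "user", "content": raw_email},
        ],
    )
    # The analyst, not the model, makes the final disposition.
    return response.choices[0].message.content
```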
Government Oversight
Finally, the potential for ChatGPT itself to be hacked is a significant concern that calls for government oversight of OpenAI and other companies releasing generative AI products. The Biden administration has already released a "Blueprint for an AI Bill of Rights," which should be expanded to ensure that companies launching generative AI products regularly review their security features and implement minimum security measures before open-sourcing their models. Such oversight would reduce the risk of hacking and help ensure that ChatGPT and other generative AI products do not become dangerous propaganda machines.