The Dark Side of Generative AI: How ChatGPT Can Help Phishers and Hackers

Published on May 8, 2023


The use of generative AI has grown exponentially in recent months, and with this surge in popularity comes a rise in cybercriminal activity. History has shown that whenever a technology goes mainstream, cybercriminals exploit it for illicit gain, and generative AI is no exception. In this article, we explore how phishers and hackers may use ChatGPT and other generative AI tools to aid their schemes.

Creating Convincing Social Engineering Scams

Phishing is one of the most prevalent forms of social engineering in cybercrime today. Cybercriminals use phishing emails to trick people into revealing sensitive information, such as usernames and passwords.

Cybercriminals can use generative AI to craft phishing emails that are more convincing, making them harder for recipients to recognize as fraudulent. One obvious use is to remove the red flags that normally give a phishing email away, such as misspellings, grammatical errors, and awkward phrasing. ChatGPT could also help cybercriminals research their targets, allowing them to tailor a more plausible scam to a specific individual.
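To see why this matters, consider how simple anti-phishing heuristics work. The sketch below (a hypothetical, deliberately naive filter; the phrases and weights are illustrative assumptions, not drawn from any real product) scores an email on exactly the kinds of red flags mentioned above. AI-polished phishing text that contains none of these tells would sail straight past such a filter.

```python
# Hypothetical sketch: a naive phishing filter that scores an email on
# classic red flags (misspellings, clumsy stock phrases). The phrase list
# and weights are made up for illustration.

RED_FLAGS = {
    "recieve": 2,                 # common misspelling
    "kindly do the needful": 2,   # awkward stock phrasing
    "urgent action required": 1,  # manufactured urgency
    "verify your password": 1,    # credential-harvesting bait
}

def red_flag_score(email_text: str) -> int:
    """Return a crude suspicion score based on known red-flag phrases."""
    text = email_text.lower()
    return sum(weight for phrase, weight in RED_FLAGS.items() if phrase in text)

clumsy = "URGENT ACTION REQUIRED: kindly do the needful and recieve your refund"
polished = "Hi Sam, please review the attached invoice before Friday's meeting."

print(red_flag_score(clumsy))    # 5 -- flagged as suspicious
print(red_flag_score(polished))  # 0 -- slips past the heuristic entirely
```

The second message is just as malicious in intent as the first, but because a language model can generate fluent, error-free text on demand, none of the surface-level signals this kind of heuristic depends on are present.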

Aiding in Exploiting Security Vulnerabilities

In addition to helping craft convincing social engineering scams, ChatGPT may also be used by hackers to exploit security vulnerabilities. While ChatGPT is designed to withhold certain types of information, an attacker may be able to extract it simply by rephrasing the question.

For example, a criminal might pose as a security professional and ask a series of innocent-sounding questions, piecing the answers together into a blueprint for breaching a system. By manipulating ChatGPT and other generative AI tools in this way, the criminal may acquire the information needed to exploit security vulnerabilities.

It is important to note that while ChatGPT and other generative AI tools have guardrails intended to prevent them from aiding cybercriminals, a determined criminal may still find ways to manipulate them into divulging the information they want. As the use of generative AI continues to grow, cybersecurity professionals must remain vigilant about the potential threats these technologies pose.