ChatGPT: The Chatbot that Hackers Love to Use Against You

Published on May 8, 2023

Can Cybercriminals Use ChatGPT to Hack Your Bank or PC?

Since the launch of ChatGPT, the OpenAI chatbot has been used by millions of people worldwide for purposes such as generating music, writing code, and drafting text. However, as the use of AI chatbots rises, it is essential to consider the associated security risks too. Unfortunately, like any other technology, ChatGPT can also be put to illegitimate uses, such as helping attackers go after your sensitive information.

Hackers can exploit ChatGPT to create malicious content, such as fake emails designed to gain access to your PC or bank account. Cybercriminals are already using ChatGPT to write new malware or improve existing malicious code. They can create malware that encrypts your files, logs your keystrokes, or penetrates your devices without your knowledge.

Though OpenAI has implemented safeguards that reject prompts asking ChatGPT to create malware, cybercriminals have found ways to bypass these content moderation systems and obtain code for their attacks.

For instance, they can rephrase their prompts, framing requests as legitimate penetration-testing exercises, to trick ChatGPT into writing code, which they can then tweak and use in cyberattacks. Check Point, an Israeli security firm, found that a hacker used ChatGPT to create basic infostealer malware. The firm also found another user who claimed to have built a multi-layer encryption tool with ChatGPT that could encrypt multiple files in a ransomware attack.

Another incident reported by Check Point involved ChatGPT generating malicious VBA code that could infect a PC if embedded in a Microsoft Excel file. Furthermore, many data breaches start with phishing attacks, in which cybercriminals may use ChatGPT to compose a convincing email that tricks users into clicking a malicious link or handing over their banking passwords.

ChatGPT can help hackers with phishing scams by producing large volumes of natural-sounding text tailored to specific audiences. If you receive an email from a seemingly legitimate source, it is best to verify it by visiting the organization's website directly instead of clicking any embedded links or opening attachments that may infect your device. Cybercriminals can also create fake accounts on chat platforms, posing as customer-service representatives, and then redirect users to bogus websites that trick them into revealing sensitive information such as bank login details.
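The advice above — compare a link against the organization's real address rather than trusting the email — can be automated in a crude way. Below is a minimal, illustrative Python sketch (the trusted domain `example-bank.com` and the function name `is_suspicious` are placeholders, not from any real product) that flags links whose host is not a known-good domain or a subdomain of one:

```python
from urllib.parse import urlparse

# Hypothetical trusted domain -- in practice, use the address you already
# have on file for the organization, never one taken from the email itself.
TRUSTED_DOMAINS = {"example-bank.com"}

def is_suspicious(link: str) -> bool:
    """Flag a link whose host is not a trusted domain (or a subdomain of one).

    This is a crude heuristic only: attackers also use lookalike spellings
    (e.g. "examp1e-bank.com") and open redirects that this check won't catch.
    """
    host = urlparse(link).hostname or ""
    host = host.lower().rstrip(".")
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_suspicious("https://login.example-bank.com/account"))   # False
print(is_suspicious("https://example-bank.com.evil.io/login"))   # True
```

Note how the second link exploits a common trick: the real bank's domain appears at the start of the hostname, but the actual site is `evil.io`. Checking the end of the hostname, as the sketch does, is what catches it.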

In conclusion, ChatGPT is a powerful AI chatbot that can answer your queries in an instant. However, it can also be exploited for malicious purposes, such as generating phishing emails and creating malware. It is crucial to be aware of the potential risks and to take the necessary security measures so that hackers cannot use AI chatbots like ChatGPT against your privacy and security.