ChatGPT raises data theft, hacking risks multi-fold
As more people embrace AI-powered chatbots such as ChatGPT, the cybersecurity risks associated with generative AI models have become a pressing concern. While these models are designed to facilitate communication and provide helpful responses, experts warn that they pose significant risks of hacking and data breaches that could compromise personal information.
Surging ChatGPT-related scams
A report by Palo Alto Networks Unit 42 showed that ChatGPT-related scams are surging even though OpenAI, the creator of ChatGPT, offers a free version of the chatbot. Scammers lure victims to fraudulent websites and claim they must pay for access. Entering anything sensitive or confidential on such sites puts users at risk, as scammers may collect and steal the information provided. Moreover, the chatbot's responses could be manipulated to give users incorrect answers or misleading information.
The cybersecurity landscape impacted
AI has long been a part of the cybersecurity industry. However, generative AI and ChatGPT are having a profound impact on the future. The CEO of IT services and consulting company Clover Infotech, Neelesh Kripalani, stated that ChatGPT can impact the cybersecurity landscape through the development of more sophisticated social engineering or phishing attacks. Such attacks are used to trick individuals into divulging sensitive information or taking actions that can compromise their security.
Potential for identity misuse
Aside from cybersecurity risks, it is also vital to understand that ChatGPT may enable identity misuse. In one unusual incident, ChatGPT falsely included a respected US law professor on a list of legal scholars accused of sexually harassing students, a list generated as part of a research study. Such occurrences could be detrimental to one's reputation and cause severe mental distress.
The FTC's stance on AI risks
US Federal Trade Commission (FTC) Chair Lina Khan warned that modern AI technologies like ChatGPT can be used to "turbocharge" fraud. She stated that the FTC would take action against market participants who use AI tools effectively designed to deceive people. Separately, several AI researchers and technology leaders, including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging AI labs worldwide to halt the development of large-scale AI systems, citing the "profound risks to society and humanity" that this software is alleged to pose.
Meta's discovery of malware creators
Meta (formerly Facebook) discovered malware creators exploiting public interest in ChatGPT to entice users into downloading harmful applications and browser extensions. The company found around ten malware families posing as ChatGPT and similar tools in attempts to compromise accounts across the internet. Meta detected and blocked over 1,000 of these unique malicious URLs from being shared on its apps.
In conclusion, while ChatGPT is embraced worldwide and puts powerful communication tools in the hands of many users, the cybersecurity risks associated with it should not be downplayed. It is crucial to take the necessary steps to minimize these risks and ensure online safety.