Malicious ChatGPT Imposters Are a Major Concern for Meta's Security Experts
Meta's security team is concerned about a rising number of malicious ChatGPT imposters, which are being used to hijack user accounts and take over business pages. Meta's latest Q1 security report reveals that malware operators and spammers follow trends and high-engagement themes that catch people's interest, and AI chatbots such as ChatGPT are currently the most popular tech trend.
Since March, Meta's security analysts have identified around ten malware families posing as AI chatbot-related tools such as ChatGPT. These fraudulent ChatGPT tools spread through Facebook advertisements, and some of the malicious apps even bundle working AI chatbot features to appear authentic.
The Washington Post recently reported on how these fraudulent ChatGPT apps deceive people into trying a fake version, and Meta says it has blocked more than 1,000 unique links to the detected malware variants shared across its platforms. The company has also provided technical details on how scammers gain access to accounts, including hijacking logged-in sessions and maintaining access, a tactic similar to the one that took down Linus Tech Tips.
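The session-hijacking tactic Meta describes works because most web services treat whoever presents a valid session cookie as the logged-in user. The Python sketch below is a general illustration of that principle, not the malware's or Meta's actual code; the endpoint and cookie name are hypothetical. It shows why a stolen cookie sidesteps the login step, password and all:

```python
import requests

# Hypothetical illustration: many web services identify a logged-in user
# solely by the session cookie the browser presents. Malware that copies
# that cookie off the victim's machine can replay it from anywhere and be
# treated as the victim -- no password or two-factor prompt involved.

stolen_cookie = {"session_id": "abc123"}  # value exfiltrated by the malware

# Build (without sending) the attacker's request: byte-for-byte it looks
# like one the victim's own browser would produce.
req = requests.Request(
    "GET",
    "https://service.example.com/account/settings",  # hypothetical endpoint
    cookies=stolen_cookie,
).prepare()

print(req.headers["Cookie"])  # "session_id=abc123" -- the only credential needed
```

This is also why "maintaining access" is cheap for the attacker: as long as the stolen session stays valid, no further credentials are required.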
Meta is launching a new support flow to help businesses that have been hacked or locked out of Facebook get back up and running. Business pages are typically compromised when malware targets the individual Facebook users who have access to them. To counter this, Meta is rolling out new Meta Work accounts that support existing, and typically more secure, single sign-on (SSO) credential services from an organization, and that don't link to a personal Facebook account at all. Once a business account is migrated, malware such as the fake ChatGPT tools should find it much harder to compromise, as sketched below.
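The security gain comes from where the credential lives. Under SSO, the business service only accepts short-lived tokens signed by the company's identity provider, so a stolen personal Facebook session grants nothing. The sketch below, using PyJWT, is a minimal illustration of that token-validation pattern under assumed names (IDP_SECRET, EXPECTED_AUDIENCE, authenticate); it is not Meta's actual implementation:

```python
import jwt  # PyJWT; sketches the token-validation side of an SSO flow

# Hypothetical sketch: the business tool never handles a Facebook password
# or session cookie. It only trusts tokens signed by the company's
# identity provider (IdP), checked against an audience it expects.

IDP_SECRET = "shared-secret-known-only-to-idp-and-service"  # demo value
EXPECTED_AUDIENCE = "meta-work-account"                     # assumed audience

def authenticate(sso_token: str) -> str:
    """Return the corporate user ID if the IdP-signed token checks out."""
    claims = jwt.decode(
        sso_token,
        IDP_SECRET,
        algorithms=["HS256"],        # reject unsigned or re-signed tokens
        audience=EXPECTED_AUDIENCE,  # token must be minted for this service
    )
    return claims["sub"]

# The IdP side would mint the token only after its own login + 2FA checks:
token = jwt.encode(
    {"sub": "employee-42", "aud": EXPECTED_AUDIENCE},
    IDP_SECRET,
    algorithm="HS256",
)
print(authenticate(token))  # -> "employee-42"
```

In a setup like this, session-stealing malware on an employee's personal machine has no Facebook credential worth taking: access to the business page hinges on the enterprise identity provider, not on a personal account.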