Meta warns users to look out for ChatGPT-related scams
As soon as ChatGPT, the AI-powered chatbot from Microsoft-backed OpenAI, became widely known, scammers took advantage of it. According to a new security report from Meta (formerly known as Facebook), around 10 types of malware have posed as ChatGPT and similar AI-based tools since March 2022. The main aim of these scams is to compromise online accounts, especially those of businesses, by tricking users into giving up sensitive information or accepting malicious payloads.
The scams are delivered through various channels, including web browser extensions, some of which can be found in official web stores. These extensions may advertise ChatGPT-related tools and functionality, but their real purpose is to deceive users. Meta's chief information security officer, Guy Rosen, has urged users to be cautious and stay vigilant.
Meta has seen malware masquerading as ChatGPT apps and then switching their lures to other popular products such as Google's AI-powered Bard tool, in a bid to avoid detection. However, Meta has detected and blocked more than 1,000 unique malicious URLs from being shared on its apps and has also reported them to the hosting companies, enabling them to take appropriate action.
To address scammers' new tactics, Meta has promised to continue highlighting how these malicious campaigns function, sharing threat indicators with companies and introducing updated protections. Meta has also launched a new support flow for businesses impacted by malware.
Rosen has warned that the generative AI space is rapidly evolving, and bad actors know it. Hence, everyone should stay vigilant and watch out for potential scams.
What Can ChatGPT Do and What Can't It Do?
ChatGPT is a brilliant tool, a modern marvel of natural language artificial intelligence that can do incredible things. However, OpenAI has put safeguards in place to prevent it from doing things it shouldn't. Also, it has some limitations based on its design, the data it was trained on, and the sheer limitations of a text-based AI.
There are, of course, differences between what GPT-3.5 can do compared to GPT-4, which is only available through ChatGPT Plus. Some of these limitations may be lifted as the technology develops further, but there are some things ChatGPT may never be able to do. Below is a list of 11 things that ChatGPT can't or won't do[1] - for now:
- It can't write about anything after 2021, since its training data ends there
OpenAI Chases Down a Computer Science Student over GPT-4 Project
OpenAI, the nonprofit research group turned for-profit company, threatened to sue a computer science student known as Xtekky over GPT4free, an open-source GPT-4 project hosted on GitHub. Created by the European student, the tool pings various websites that use GPT-4, allowing users to interact with the model without paying for OpenAI's ChatGPT Plus service. Xtekky said he created the project to help people who can't afford a ChatGPT Plus subscription.
Microsoft's Responsible AI Strategy
Microsoft recently laid out how it intends to keep its future AI efforts responsible and in check after disbanding its AI Ethics & Society team in March 2023. Natasha Crampton, the Chief Responsible AI Officer at Microsoft, explained in a blog post that the ethics team was disbanded because "a single team or a single discipline tasked with responsible or ethical AI was not going to meet our objectives."
Microsoft is committed to developing AI responsibly and to ensuring that its AI technologies are transparent, respectful, and secure. It will engage with those who could be most affected by its AI technology and produce strong documentation for accountability and transparency. In addition, Microsoft will ensure that its AI technology is consistent with its ethical principles, laws, and regulations.
[1] Source: ZDNet