10 Alarming Uses of ChatGPT Unveiled by OpenAI

Published On Sat Feb 22 2025
OpenAI Bans Accounts Misusing ChatGPT for Surveillance and...

OpenAI recently revealed that it has banned a set of accounts that misused ChatGPT to help develop a suspected AI-powered surveillance tool. The social media listening tool, believed to originate from China and to be powered by one of Meta's Llama models, was being built with help from these accounts, which used ChatGPT to generate detailed descriptions and analyze documents for a system designed to collect real-time data on anti-China protests in the West and share the findings with Chinese authorities.

Peer Review Campaign

The operation, known as Peer Review, was flagged by researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley for its promotion and review of surveillance tooling. The tool is designed to ingest and analyze content from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit. In one instance, the actors used ChatGPT to debug and modify source code for monitoring software known as the "Qianyue Overseas Public Opinion AI Assistant."

Beyond using ChatGPT to gather information on think tanks in the United States and on government officials in countries including Australia, Cambodia, and the United States, the accounts were found to use the tool to read, translate, and analyze English-language documents, including screenshots of announcements for Uyghur rights protests in Western cities. The authenticity of these images remains unknown.

Disruption of Malicious Activities

OpenAI also disclosed that it disrupted several other clusters misusing ChatGPT for various malicious activities. This action is part of a broader trend in which malicious actors exploit AI tools to run cyber-enabled disinformation campaigns and other nefarious operations.

OpenAI emphasized the importance of collaboration between AI companies, upstream providers, downstream distribution platforms, and researchers in combating such threats. Sharing insights into threat actors can enhance detection and enforcement measures, providing a more robust defense against malicious activities.

Overall, the incidents involving the misuse of AI tools highlight the need for vigilance and collaboration within the tech industry to address emerging security challenges.

For more information, you can refer to the original OpenAI announcement and the research on cyber-enabled disinformation campaigns mentioned in the post above.