OpenAI takes down covert operations tied to China and other countries
Researchers at OpenAI have found Chinese propagandists using ChatGPT to write posts and comments on social media platforms, and even to draft performance reviews describing that work for their supervisors. The findings show how artificial intelligence is being put to work in covert influence and propaganda operations.

Covert Operations by China
OpenAI researchers said that over the past three months they disrupted 10 operations using its AI tools for malicious purposes, a significant portion of them originating in China. These operations targeted a wide range of countries and topics, combining elements of influence operations, social engineering, and surveillance.
One such operation, dubbed "Sneer Review" by OpenAI, used ChatGPT to generate comments and posts across platforms including TikTok, Reddit, and Facebook, in multiple languages such as English, Chinese, and Urdu. The operation steered discussion of politically sensitive topics, including the Trump administration and a Taiwanese game critical of the Chinese Communist Party, seeking to sway opinions by creating a facade of organic engagement.

The actors behind Sneer Review also used ChatGPT to draft internal documents, including a detailed performance review describing how the operation was set up and run. The social media behavior OpenAI observed during its investigation closely mirrored the procedures laid out in that review.
Intelligence Collection and Deceptive Practices
Another China-linked operation focused on intelligence gathering, with actors posing as journalists and analysts. It used ChatGPT to write posts and account biographies, translate messages, and analyze data. The operation also claimed to run fake social media campaigns to recruit intelligence sources, claims consistent with the online activity OpenAI observed alongside the AI-generated content.

OpenAI's latest report also described disruptions of covert influence operations originating in Russia, Iran, and other countries, underscoring the diverse tactics and platforms these operations employ. Despite the use of advanced AI tools, many of the operations were caught in their early stages, before they could reach large, authentic audiences.
"It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together," remarked an OpenAI investigator. While AI technology offers new capabilities for malicious actors, the outcomes of these covert operations are not solely determined by the sophistication of the tools employed.
Have information about foreign influence operations and AI? You can reach Shannon Bond through encrypted communications on Signal.