OpenAI announces it has blocked accounts using ChatGPT to generate false information about the US presidential election
On Friday, August 16, 2024 (local time), OpenAI announced that it had detected and blocked accounts using ChatGPT to generate false information on multiple topics, including the US presidential election. However, there is no indication that the ChatGPT-generated content reached a meaningful audience of general internet users.
Disrupting a covert Iranian influence operation | OpenAI
OpenAI disrupts Iranian operation that used ChatGPT for disinformation
OpenAI is committed to preventing misuse of AI-generated content and improving transparency. This includes efforts to detect and stop influence operations that attempt to manipulate public opinion or sway political outcomes while concealing the true identities and intentions of the parties behind them. With elections scheduled around the world in 2024, countering influence operations is particularly important, and OpenAI has been using its own AI models to detect such misuse.
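OpenAI has not published the internals of the detection pipeline described above. Purely as an illustration of how automated screening of generated text can work, the sketch below uses OpenAI's publicly documented Moderation endpoint to flag policy-violating content; the model name, sample texts, and the way results are summarized are assumptions for this sketch and are not part of the system described in this article.

```python
# Illustrative only: screen a batch of texts with OpenAI's public Moderation
# endpoint. This is NOT the internal influence-operation detection pipeline
# described in the article; model name and sample texts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

texts = [
    "Example post commenting on an election candidate.",
    "Example lifestyle post about fashion trends.",
]

for text in texts:
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumed model name; adjust to what is available
        input=text,
    ).results[0]
    # Collect the category names the endpoint flagged for this text.
    flagged_categories = [
        name for name, hit in result.categories.model_dump().items() if hit
    ]
    print(f"flagged={result.flagged} categories={flagged_categories} :: {text[:40]}")
```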
Using these detection models, OpenAI identified an influence operation known as 'Storm-2035' that was generating false information intended to influence the US presidential election, and the company announced that it has blocked the operation's accounts from accessing ChatGPT. Storm-2035 used ChatGPT to generate false information on a range of topics, including commentary about candidates on both sides of the US presidential election, and shared it through social media and websites.
However, OpenAI found that the majority of the social media posts in which Storm-2035 shared disinformation created with ChatGPT had little impact, receiving few likes, shares, or comments. OpenAI uses the Breakout Scale, a framework developed at the Brookings Institution, to assess the impact of influence operations, and rated this operation at the low end of Category 2, meaning it was active on multiple platforms but there was no evidence that real people picked up or widely shared its content.
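As a rough, hypothetical illustration of what classifying posts as 'little impact' could look like in practice, the snippet below tallies engagement counts per post and compares them against an arbitrary threshold; the data structure, example numbers, and threshold are assumptions, not figures from OpenAI's report.

```python
# Hypothetical sketch: flag "low impact" posts from engagement counts.
# The Post fields, example values, and the threshold of 10 interactions
# are assumptions, not numbers taken from OpenAI's report.
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    likes: int
    shares: int
    comments: int

    @property
    def interactions(self) -> int:
        # Total engagement across likes, shares, and comments.
        return self.likes + self.shares + self.comments

posts = [
    Post("https://example.com/p1", likes=2, shares=0, comments=1),
    Post("https://example.com/p2", likes=340, shares=57, comments=12),
]

LOW_IMPACT_THRESHOLD = 10  # arbitrary cut-off chosen for the illustration

for post in posts:
    label = "low impact" if post.interactions < LOW_IMPACT_THRESHOLD else "some traction"
    print(f"{post.url}: {post.interactions} interactions -> {label}")
```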
Storm-2035 generated content mainly about the Gaza conflict, Israel's presence at the Olympics, and the US presidential election. It also generated content about Venezuelan politics, the rights of Latino communities in the US, and Scottish independence. OpenAI notes that the political content was interspersed with comments about fashion and beauty, which may have been an attempt either to disguise the fact that the content was AI-generated or to attract followers.
Meta's Response to the Incident
When the news outlet Axios asked Meta about the incident, the company responded that it had disabled the Instagram account in question and that the account was linked to an Iranian disinformation campaign from 2021 that targeted users in Scotland.
Axios also reached out to X for comment but had not received a response at the time of writing. OpenAI noted that all of the social media accounts in question appeared to be 'inactive'.