Behind the Scenes of OpenAI's Fight Against Disinformation

Published on August 20, 2024

OpenAI recently disclosed that it had taken action against a network of Iranian accounts that used its ChatGPT chatbot to conduct a foreign influence campaign aimed at the U.S. presidential election. The operation generated long-form articles and social media comments intended to sway public opinion.

Suspect Activity

The accounts associated with the Iranian influence operation created content that masqueraded as both liberal and conservative viewpoints. Some posts insinuated that former President Donald Trump was being censored on social media and intended to declare himself king of the U.S. Another post suggested that Vice President Kamala Harris' selection of Tim Walz as her running mate was a calculated move for unity.

AI Tools and Disinformation

Ben Nimmo, a principal investigator at OpenAI's Intelligence and Investigations team, noted during a news briefing that the Iranian operation attempted to appeal to both ends of the political spectrum but struggled to attract meaningful engagement. This incident reflects a broader trend where foreign actors are experimenting with AI tools to propagate disinformation.

This is not an isolated case. Earlier, Microsoft uncovered pro-Russian accounts attempting to amplify a fake video depicting violence at the Olympics. Similarly, Meta Platforms Inc. removed numerous Facebook accounts linked to influence operations originating from Iran, China, and Russia, some of which leveraged AI tools to disseminate false information.

Continued Concerns

The revelation by OpenAI underscores the persistent threat posed by foreign entities seeking to manipulate public discourse, especially in the lead-up to significant events like elections. The U.S. intelligence community has consistently raised alarms about the efforts of countries like Iran, Russia, and China to influence American opinions through deceptive means.

OpenAI had previously highlighted attempts by networks from Russia, China, Iran, and Israel to exploit its AI products for propaganda. These networks used AI-generated content to create a facade of authenticity, but their campaigns failed to attract substantial engagement.

As the technological landscape evolves, it is crucial for platforms and authorities to remain vigilant against such malicious activities that aim to subvert the integrity of public discourse.

Source: ©2024 Bloomberg L.P. Visit bloomberg.com. Distributed by Tribune Content Agency, LLC.