An AI-powered bot army on X spread pro-Trump and pro-GOP propaganda
An army of political propaganda accounts powered by artificial intelligence posed as real people on X to argue in favor of Republican candidates and causes, according to a research report out of Clemson University. The report details a coordinated AI campaign using large language models (LLMs) — the type of artificial intelligence that powers convincing, human-seeming chatbots like ChatGPT — to reply to other users.
While it’s unclear who operated or funded the network, its focus on particular political pet projects with no clear connection to foreign countries indicates it’s an American political operation, rather than one run by a foreign government, the researchers said. As the November elections near, the government and other watchdogs have warned of efforts to influence public opinion via AI-generated content. The presence of a seemingly coordinated domestic influence operation using AI adds yet another wrinkle to a rapidly developing and chaotic information landscape. The network identified by the Clemson researchers included at least 686 X accounts that have posted more than 130,000 times since January.
Coordinated Political Manipulation
It targeted four Senate races and two primary races and supported former President Donald Trump’s re-election campaign. Many of the accounts were removed from X after NBC News emailed the platform for comment. The platform did not respond to NBC News’ inquiry. The accounts followed a consistent pattern. Many had profile pictures that appealed to conservatives, like the far-right cartoon meme "Pepe the Frog," a cross, or an American flag. They frequently replied to a person talking about a politician or a polarizing political issue on X, often to support Republican candidates or policies or to denigrate Democratic candidates.
Fake accounts and bots designed to artificially boost other accounts have plagued social media platforms for years. But it’s only with the advent of widely available large language models in late 2022 that it has been possible to automate convincing, interactive human conversations at scale. “I am concerned about what this campaign shows is possible,” Darren Linvill, the co-director of Clemson’s Media Hub and the lead researcher on the study, told NBC News. “Bad actors are just learning how to do this now. They’re definitely going to get better at it.”
Targeted Support and Opposition
The accounts took distinct positions on certain races. In the Ohio Republican Senate primary, they supported Frank LaRose over Trump-backed Bernie Moreno. In Arizona’s Republican congressional primary, the accounts supported Blake Masters over Abraham Hamadeh. Both Masters and Hamadeh were endorsed by Trump over four other GOP candidates. The network also supported the Republican nominee in Senate races in Montana, Pennsylvania, and Wisconsin, as well as North Carolina’s Republican-led voter identification law.
Detection and Response
A spokesperson for Hamadeh, who won the primary in July, told NBC News that the campaign noticed an influx of messages criticizing Hamadeh whenever he posted on X, but didn’t know whom to report the phenomenon to or how to stop it. X offers users an option to report misuse of the platform, like spam, but its policies don’t explicitly prohibit AI-driven fake accounts. The researchers determined that the accounts were in the same network by assessing metadata and tracking the contents of their replies and the accounts they replied to — sometimes the accounts repeatedly attacked the same targets together.