OpenAI has disrupted a covert Iranian influence operation (IO) that was using ChatGPT to spread political propaganda. The company identified a group called Storm-2035 behind the operation, which circulated AI-generated political content relating to the US Presidential election. Following the discovery, OpenAI suspended the associated accounts and shared intelligence with relevant government, campaign, and industry entities. Notably, OpenAI said there is no evidence that the content significantly reached its intended audience.
Threat Identification and Tactics
This threat was first flagged in a Microsoft Threat Intelligence Report released on August 9, which also identified several other Iranian groups operating alongside Storm-2035. The operation leveraged ChatGPT to produce articles on US politics and global affairs, published across five news websites.
Additionally, the group distributed content on X (formerly Twitter) in both English and Spanish, focusing on subjects such as the Gaza conflict, the US election, and Latin American politics. Notably, the actors appeared to play both sides, criticizing figures like Donald Trump and Kamala Harris in different posts.
Similar Tactics and Geopolitical Implications
Similar strategies have been observed before, with Russian propaganda networks employing comparable methods during the 2016 election. By concentrating on divisive cultural issues from different political standpoints, these operations aimed to stoke existing rifts within American society.
OpenAI has a history of tackling AI-enabled covert activity by geopolitical adversaries of the US: it previously identified and shut down five accounts affiliated with state-sponsored threat actors engaged in phishing campaigns linked to China, Russia, Iran, and North Korea.
Collaboration and Military Applications
Despite facing challenges associated with its technology being exploited in geopolitical arenas, OpenAI has shown a willingness to collaborate with the US defense establishment. Earlier this year, the company revised its usage policies to allow for the potential use of its models in military and warfare contexts.
Subsequently, OpenAI has engaged in partnerships with the Pentagon to develop cybersecurity solutions, including the utilization of its AI models for simulated wargames by the US military. Moreover, the appointment of former NSA head Paul Nakasone to the company's board of directors underscores AI's expanding role in military operations, reflecting a broader global trend evident in recent conflicts in regions like Gaza and Ukraine.