OpenAI has banned a cluster of ChatGPT accounts tied to an Iranian influence operation that was generating content about the U.S. presidential election. According to a Friday blog post, the accounts produced AI-generated articles and social media posts, though their reach appears to have been limited.
This isn’t the first time OpenAI has shut down state-affiliated actors using ChatGPT for malicious purposes. Back in May, the company disrupted five similar campaigns attempting to manipulate public opinion.
The tactics used in these operations resemble previous election interference efforts on social media platforms like Facebook and Twitter. Now, the same groups (or similar ones) are using generative AI to spread misinformation online. OpenAI, much like the social media companies before it, is taking a reactive approach, banning accounts as they surface.
The investigation into this cluster was aided by a Microsoft Threat Intelligence report that identified the group, tracked as Storm-2035, as part of a broader campaign dating back to 2020. Microsoft linked Storm-2035 to Iran, noting that the group operates multiple sites designed to mimic legitimate news outlets. These sites targeted U.S. voter groups with polarizing content on topics such as the presidential election, LGBTQ rights, and the Israel-Hamas conflict.
Rather than pushing a particular policy agenda, the operation appears aimed at fomenting division and conflict. OpenAI discovered five websites linked to Storm-2035 posing as both progressive and conservative news outlets, with names like "evenpolitics.com." These fronts used ChatGPT to generate articles, including false claims, such as one alleging that Elon Musk's platform, X, was censoring Trump's tweets, when in reality Musk has encouraged Trump's presence on the platform.