ChatGPT blocked 250,000 AI image requests of US election candidates
ChatGPT declined more than 250,000 requests to generate images of US election candidates. OpenAI, the company behind the AI chatbot, revealed in a recent blog post that its image-generation model DALL-E refused requests to produce images of president-elect Donald Trump, his running mate JD Vance, current president Joe Biden, Democratic candidate Kamala Harris, and her vice-presidential pick, Tim Walz.
Safeguarding Against Misuse
According to OpenAI's blog update, these refusals were a result of the "safety measures" put in place before election day. OpenAI emphasized the importance of these guardrails, particularly in an electoral context, as part of their broader strategy to prevent the misuse of their tools for deceptive or harmful purposes.
Preventing Deceptive Operations
OpenAI stated that it has not identified any instances of US election-related influence operations achieving widespread impact through its platforms. The company previously stopped an Iranian influence campaign known as Storm-2035 from generating articles about US politics, and the accounts associated with the campaign were subsequently banned from using OpenAI's platforms.
Related Links:
Learn more about OpenAI's initiatives:
- OpenAI launches ChatGPT-powered search engine, putting it in competition with Google
- OpenAI's efforts in disrupting influence operations
- OpenAI's October Threat Intelligence Report