OpenAI's ChatGPT Rejected Over 250,000 US Election Deepfake Requests
Over 250,000 requests for image generation, or 'deepfakes', of political figures in the US elections, including President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance, and Governor Walz, were rejected. A recent report revealed that Artificial Intelligence giant OpenAI worked to prevent 'misuse' and 'abuse' of its AI tool ChatGPT.
Preventing Misinformation in US Elections
According to OpenAI's report on the recently concluded US presidential election, these requests were rejected to prevent the spread of misinformation during the elections. OpenAI considers this capability part of the safety measures built into its AI systems, especially tools like the DALL-E image generator.

Protecting Against Deception
OpenAI identified a significant risk that AI-generated content could be used to deceive viewers by altering images or videos to make it appear that someone said or did something they never did. To mitigate this risk, OpenAI ensured that the AI tool rejects requests to create images featuring real people, especially prominent figures such as politicians.
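To illustrate the general idea of such a safeguard, here is a minimal, purely hypothetical sketch of a pre-generation filter that refuses image requests mentioning named public figures. This is not OpenAI's actual implementation; the names, list, and matching logic are assumptions for illustration only.

```python
# Hypothetical illustration only: a simplified pre-generation filter that
# refuses image requests mentioning named public figures. This is NOT
# OpenAI's real system; the name list and matching logic are assumptions.

PUBLIC_FIGURES = {
    "joe biden", "donald trump", "kamala harris", "jd vance", "tim walz",
}

def should_refuse(prompt: str) -> bool:
    """Return True if the prompt appears to request imagery of a real public figure."""
    text = prompt.lower()
    return any(name in text for name in PUBLIC_FIGURES)

if __name__ == "__main__":
    request = "Generate a photo of Kamala Harris conceding the election"
    if should_refuse(request):
        print("Request refused: images of real public figures are not generated.")
    else:
        print("Request passed the filter.")
```

A production system would rely on far more than simple string matching, but the sketch captures the reported behavior: requests naming real political figures are turned away before any image is generated.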
Directing Users to Reputable Sources
OpenAI also took measures to guide users seeking information on voting or election outcomes to reputable sources. For example, ChatGPT directed approximately 1 million users to CanIVote.org in the week leading up to the election. On Election Day and the following day, the chatbot provided over 2 million answers, referring users to the Associated Press and Reuters for the most accurate election results.
Avoiding Political Bias
The company clarified that its AI tools were not designed to express political opinions or endorse any candidate. By contrast, other AI chatbots, such as Elon Musk's Grok AI, favored Trump as the winner.

Addressing Risks and Ensuring Safety
OpenAI had been proactive in preparing for such risks throughout the year by implementing safety measures to prevent the misuse of its AI models for spreading false information. This initiative aimed to safeguard the integrity of information shared online, particularly during critical events like elections.