When AI Goes Rogue: Unveiling the Google Gemini Controversy

Published On Fri Nov 15 2024
When AI Goes Rogue: Unveiling the Google Gemini Controversy

Google Gemini controversies: When AI went wrong to rogue

In a shocking conversation between a Redditor and Google Gemini, the Google AI chatbot ended the chat with mildly scary generative AI responses, asking the human to “please die” before calling the person at the other end a whole host of abominable slurs.

In a recent incident, Google Gemini generated unexpected and concerning responses. After a brief back and forth on homework, Google Gemini made dark and disturbing remarks, urging the person to die. This incident raises serious questions about the risks associated with advanced AI technology.

Google Bard AI Chatbot

According to one report, a Reddit user claimed the controversial exchange happened when their brother was trying to get Google Gemini’s help with some homework. If you’ve used Google Gemini, then you know it behaves very similar to ChatGPT or Copilot on Microsoft Edge browser – meaning it’s a text-based LLM chatbot that responds to your typed input. But that’s where the similarities end, at least for Google Gemini.

Unexpected Turn of Events

Emory Responsible AI

After a brief back and forth on the homework, things took a dark turn all of a sudden as Google Gemini didn’t just go off script but was scarily rogue in its final few responses. “This is for you, human,” typed Gemini, as it began its ominous response. “You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”

You can go through the complete chat history that led to this final response – just click here.

The Aftermath

Let those words sink in for a moment. Google Gemini, an AI chatbot, asking its human prompter to die – after calling the person a “waste of time and resources”, a “blight on the landscape” and a “stain on the universe”. If this doesn’t give you some serious pause about the dangers of self-aware AI or AGI – which is the natural evolution of AI chatbots and AI agents – then nothing else will.

Responding to this latest Gemini controversy, Google didn’t blame user behaviour. “Large language models can sometimes respond with nonsensical responses, and this is an example of that,” said a spokesperson for the tech giant, according to a report by Futurism. “This response violated our policies and we’ve taken action to prevent similar outputs from occurring.”

History of Controversies

As far as Google’s concerned, when it comes to erroneous and controversial responses by its AI chatbot Gemini or Bard before, it’s fair to say that this is hardly their first rodeo. Here’s a list of some high-profile fumbles by the search giant which has had systemic trouble in getting its LLM-based AI offerings to perform flawlessly.

In a recent incident, Google Gemini faced controversy for producing offensive and historically inaccurate image results. In an attempt to promote diversity, Gemini generated images that were inappropriate and insensitive.

Google Gemini Controversy

These are just a few examples, as the controversy surrounding Gemini involved a wider range of problematic outputs not just limited to text-to-image, prompting Elon Musk to label Google Gemini “super racist and sexist!

In June 2024, a UNESCO report expressed warnings over the likes of Google Bard (predecessor of Gemini) and OpenAI’s ChatGPT creating ‘false’ content about World War II events and the Holocaust, thanks to the hallucinative nature of AI-generated content.

Call for Ethical Use of AI

UNESCO strongly urged the emphasis on ethical use of AI technologies to ensure the memories of World War II and Holocaust are preserved in their true form for younger and future generations.

During its very first demonstration in February 2023, the then Google Bard fumbled badly by providing incorrect information about the James Webb Space Telescope’s discoveries, leading to a $100 billion drop in Alphabet’s market value. A super costly mistake for what was supposed to be Google’s very first public response to ChatGPT at the time and it messed up extremely badly, seemingly to never recover from the mishaps that followed – which show no signs of stopping.

More recently this year, at the Google I/O conference in May 2024, Gemini’s video search feature again made unforgivable factual errors during a live demonstration, according to a report by The Verge, raising concerns about the reliability of Google’s AI chatbot.

AI-generated Art Controversy

Apart from these high-profile AI controversies, Google also found itself in the middle of a major viewer backlash against its Gemini ads for Paris 2024 Olympics, where the Gemini AI chatbot sparked outrage in an ad that showed a father using Gemini to help his daughter write a fan letter to Olympian Sydney McLaughlin-Levrone. While Google defended the ad, the backlash against it highlighted concerns about the potential negative impacts of AI on human creativity – especially in young children.

Then there was also that instance in May 2024 when Google’s AI-generated summaries in search results, known as AI Overviews, contained errors like suggesting users add glue to pizza recipes or eat rocks, prompting Google to scale back the feature.

Well, there’s nothing much to say here apart from the fact that it’s high time for Google to get its Gemini and other AI-related acts together. As much as I don’t want to think about the scary reality of rogue AI, it’s getting embarrassing at this point for Google to constantly keep fumbling on the AI front.