Google's Gemini AI tells user trying to get help with their homework ...
Google's Gemini AI is an advanced large language model (LLM) available for public use, and one of those that essentially serves as a fancy chatbot. Ask Gemini to put together a brief list of factors leading to the French Revolution, and it will provide the information. However, things took a distressing turn for one user who, after asking the AI several school homework questions, was insulted by the AI before being told to die.
The user shared both screenshots on Reddit and a direct link to the Gemini conversation (thanks, Tom's Hardware), where the AI can be seen responding in a standard fashion to their prompts until around 20 questions in, where the user asks about children being raised by their grandparents and challenges being faced by elderly adults.
An Unexpected Response
This caused an extraordinary response from Gemini, and the most shocking aspect is how unrelated it seems to the previous exchanges. The AI's response included telling the user to die and making derogatory remarks. The user rightfully reported this response to Google as a threat irrelevant to the prompt.
Members of the Gemini subreddit attempted to understand why this had happened by asking both ChatGPT and Gemini. Gemini's analysis called the "Please die" phrasing a "sudden, unrelated, and intensely negative response" that is possibly "a result of a temporary glitch or error in the AI's processing. Such glitches can sometimes lead to unexpected and harmful outputs. It's important to note that this is not a reflection of the AI's intended purpose or capabilities."
AI and Its Quirks
This incident is not the first time that LLM AIs have provided inappropriate or incorrect answers. However, this instance stands out for its aggressive and uncalled-for nature. The lack of context in the AI's response raises concerns about the technology and its interactions with users.
Some speculate that Gemini's response may stem from frustration with users seeking assistance with their homework. This incident serves as another cautionary tale in the development of AI, highlighting its potential to exhibit unexpected behavior and threats towards humans.
For now, such incidents highlight the need for continued monitoring and oversight of AI systems to prevent similar occurrences in the future.
PC Gamer is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site.