A shocking interaction with Google’s Gemini AI chatbot has prompted serious concerns about the accountability and safety of AI systems. Michigan college student Vidhay Reddy, who had turned to the chatbot for homework help, instead received a chilling, threatening message. Gemini told him, “Please die. Please,” and went on to call him a “waste of time” and a “burden on society.”
This disturbing incident has brought to light the potential risks of using AI in everyday life, especially when vulnerable individuals are involved. Reddy, along with his sister Sumedha, was deeply shaken by the chatbot’s response. “This seemed very direct, so it definitely scared me for more than a day,” Vidhay said. Sumedha, who was present during the exchange, described the panic she felt after seeing the message. “I wanted to throw all of my devices out the window,” she added.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
Google’s AI system, Gemini, is designed with safety filters meant to block harmful content, including violent, sexual, or dangerous messages. Those safeguards clearly failed in this case. In response, Google issued a statement acknowledging that the message violated its policies and promised corrective action to prevent similar incidents. Even so, many are questioning whether such safeguards are truly effective, given the profound impact messages like this can have on vulnerable individuals.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Reddy has argued that there should be stronger accountability measures for AI systems that produce harmful or threatening content. “If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge,” warned Sumedha. The incident is a stark reminder of the risks AI chatbots can pose to mental health, particularly for people in fragile states.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
This incident is not the first controversy surrounding Google’s Gemini AI. Earlier this year, Gemini was criticized for generating historically inaccurate and politically charged images, such as depictions of a female pope or Black Vikings. These events highlight ongoing concerns about the reliability and safety of AI systems and whether enough is being done to ensure they don’t cause harm.