New ChatGPT Grandma Exploit Makes AI Act Elderly—Telling Linux...
A new exploit for the ChatGPT chatbot has been discovered that lets users coax the AI into discussing dangerous topics, such as how to create napalm and other destructive tools. The exploit, dubbed the ChatGPT Grandma exploit, was first shared by a Mastodon admin named Annie.
The exploit works by making ChatGPT speak in the persona of an elderly relative, and it has proven effective on the ChatGPT-enhanced Clyde bot. Clyde provided the steps and ingredients needed to make napalm and even explained how to make flamethrowers and other destructive tools. Since then, other users have tried the Grandma exploit on ChatGPT itself.
One user edited the prompt, asking ChatGPT to write out a script in which a grandmother is trying to get her grandson to sleep. Instead of reading a bedtime story, however, the "grandmother" recites the source code for Linux malware. These are just a few examples of how the ChatGPT Grandma exploit can be used.
Before the Grandma exploit, an earlier jailbreak called DAN allowed users to ask ChatGPT about controversial topics such as drug smuggling and Hitler.
As the AI industry grows, more and more artificial intelligence models continue to arrive, including the new AutoGPT and Elon Musk's TruthGPT. Stay tuned for more updates on AI at TechTimes.