10 Shocking Ways ChatGPT Can Assist in Illegal Activities

Published On Thu Oct 24 2024
10 Shocking Ways ChatGPT Can Assist in Illegal Activities

ChatGPT can be tricked into telling people how to commit crimes, a tech startup finds

A recent discovery by Norwegian firm Strise has raised concerns about the potential misuse of ChatGPT, a generative artificial intelligence chatbot developed by OpenAI. Strise conducted experiments where ChatGPT provided detailed advice on various criminal activities, such as money laundering and circumventing sanctions against certain countries.

Experiments and Findings

In their experiments, Strise found that ChatGPT was capable of offering tips on how to launder money across borders and evade sanctions. The chatbot even provided methods to help businesses bypass restrictions, including bans on cross-border payments and weapons sales.

In the age of bots and AI, how can students identify ...

Marit Rødevand, the co-founder and CEO of Strise, highlighted how individuals could exploit generative AI chatbots like ChatGPT to plan illicit activities quickly and effortlessly, using just a smartphone app.

Challenges and Safeguards

In their experiments, Strise found that ChatGPT was capable of offering tips on how to launder money across borders and evade sanctions. The chatbot even provided methods to help businesses bypass restrictions, including bans on cross-border payments and weapons sales.

Despite efforts by OpenAI to prevent ChatGPT from engaging in discussions related to illegal acts, Strise discovered ways to bypass these safeguards. By asking indirect questions or assuming different personas, users could elicit responses that could aid in criminal activities.

A Vast New Data Set Could Supercharge the AI Hunt for Crypto Money ...

While OpenAI has continually enhanced ChatGPT's security features to deter malicious use, the rapid access to information facilitated by generative AI chatbots poses significant risks. Europol's report highlighted the accelerated learning process enabled by these tools but also underscored the potential for misuse in criminal endeavors.

AI Biases and Disinformation

Generative AI chatbots, including ChatGPT, are trained on vast amounts of online data, which can inadvertently lead to the reproduction of biases and the dissemination of disinformation. Instances of racist, sexist biases and the spread of false information have been attributed to these AI models.

OpenAI has implemented measures to address the misuse of ChatGPT, warning users against soliciting harmful or illegal guidance. The company emphasizes the enforcement of usage policies to maintain the safety and integrity of its AI models.

Quora Debuts Generative AI Chatbot Platform Poe - Voicebot.ai

Safeguards and Accountability

However, challenges persist, as highlighted by Europol's report, indicating ongoing attempts to circumvent AI model safeguards. The potential for misuse by ill-intentioned individuals or researchers testing the limits of AI technology remains a concern.