ChatGPT user in China detained for creating and spreading fake news about a nonexistent train crash
Chinese police have detained a ChatGPT user for allegedly creating and spreading fake news with the AI-powered chatbot. The fabricated story described a nonexistent train crash that supposedly killed nine construction workers in Gansu province. According to the police report, the user, identified only as Hong, used ChatGPT to write the false article, and within a short time twenty-one accounts owned by a media company based in southern China had spread it. The report did not explain how Hong managed to access ChatGPT.
Although most foreign websites and applications, including ChatGPT, are technically unavailable in China due to the country’s “Great Firewall,” determined individuals can gain access using “virtual private network” software that bypasses the firewall.
By the time Gansu security officials realized the article was fake, it had already received 15,000 views. Police subsequently raided Hong's residence, collected evidence, and took "criminal coercive measures" against him.

China's new deepfake law, which took effect on Jan. 10, prohibits several categories of fake media produced by "deep synthesis technologies" such as machine learning and virtual reality, but offers only vague definitions for many of these forbidden categories.
The law bans deepfakes used in activities that endanger national security, harm the nation's image or societal public interest, or disturb “economic or social order.” It specifically prohibits the use of such technologies to produce, publish or transmit fake news.
Hong's case underscores why governments around the world, including the European Union, are drawing up legislation to govern the use of artificial intelligence. As the technology continues to evolve, clear rules and guidelines will be needed to curb the spread of fake news and other harmful uses.