How ChatGPT Can Be Used for Deception Cybersecurity
ChatGPT, a generative AI tool, has the potential to change the way we approach deception cybersecurity. While hackers have found ways to jailbreak such tools and generate malicious code, security vendors and researchers have begun experimenting with generative AI's detection capabilities. In fact, one security researcher, Xavier Bellekens, CEO of deception-as-a-service provider Lupovis, has integrated ChatGPT into his deception strategy.
Using ChatGPT for Deception
Bellekens used ChatGPT to create a honeypot: a decoy infrastructure that misleads attackers while giving security teams insight into the exploitation techniques threat actors use to gain access to an environment. In his experiment, Bellekens asked ChatGPT to provide instructions and code for building a medium-interaction decoy printer that would support all of a real printer's functions and respond to scans. He also added a login page with the username "admin" and the password "password."
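Bellekens' actual code is not public, but the kind of output ChatGPT produces for such a prompt can be sketched with Python's standard library. Everything here is an assumption for illustration: the printer model in the page title, the `HP-ChaiSOE/1.0` banner, and the handler names are invented; only the deliberately weak "admin"/"password" credentials come from the article.

```python
# Hypothetical sketch of a decoy printer web interface (NOT Bellekens' code).
# A medium-interaction honeypot serves a believable admin page and accepts
# the intentionally weak credentials so attacker behavior can be observed.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

# Deliberately weak credentials, as described in the experiment.
DECOY_USER, DECOY_PASS = "admin", "password"

LOGIN_PAGE = b"""<html><head><title>HP LaserJet 4200 - Admin</title></head>
<body><h1>Printer Administration</h1>
<form method="POST">
<input name="username" placeholder="Username">
<input name="password" type="password" placeholder="Password">
<input type="submit" value="Login"></form></body></html>"""

def check_login(username, password):
    """Return True when the decoy's bait credentials are supplied."""
    return username == DECOY_USER and password == DECOY_PASS

class DecoyPrinterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the fake admin page to anything that scans or browses us.
        self.send_response(200)
        self.send_header("Server", "HP-ChaiSOE/1.0")  # printer-like banner
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(LOGIN_PAGE)

    def do_POST(self):
        # Accept the bait credentials; reject everything else.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        ok = check_login(fields.get("username", [""])[0],
                         fields.get("password", [""])[0])
        self.send_response(200 if ok else 401)
        self.end_headers()

# To run the decoy (blocks forever):
# HTTPServer(("0.0.0.0", 8080), DecoyPrinterHandler).serve_forever()
```

A real deployment would flesh out more printer behavior (status pages, SNMP responses), but even this skeleton is enough to show up in internet-wide scans.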
In about ten minutes, Bellekens had a decoy printer that functioned relatively well. Next, he hosted it on Vultr, using ChatGPT to log incoming connections and send them to a database. The newly created printer began attracting attention almost immediately. Bellekens cross-referenced the connecting IP addresses with a Lupovis tool called Prowl to better analyze the connections.
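The logging step can be sketched in a few lines with SQLite. This is an illustration only: the database schema, function names, and the idea of ranking source IPs for later enrichment (for example, with a tool like Prowl) are assumptions, not Bellekens' published setup.

```python
# Hypothetical sketch of honeypot connection logging (schema is invented).
# Each inbound hit is recorded with a timestamp so source IPs can later be
# cross-referenced against an enrichment service.
import sqlite3
from datetime import datetime, timezone

def init_db(path="honeypot.db"):
    """Create (or open) the connection log."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS connections (
                        ts TEXT, src_ip TEXT, request_path TEXT)""")
    return conn

def log_connection(conn, src_ip, request_path):
    """Record one inbound connection."""
    conn.execute("INSERT INTO connections VALUES (?, ?, ?)",
                 (datetime.now(timezone.utc).isoformat(),
                  src_ip, request_path))
    conn.commit()

def top_sources(conn, n=5):
    """Most active source IPs -- candidates for IP-reputation lookups."""
    return conn.execute("""SELECT src_ip, COUNT(*) AS hits
                           FROM connections
                           GROUP BY src_ip
                           ORDER BY hits DESC LIMIT ?""", (n,)).fetchall()
```

Hooking `log_connection` into the decoy's request handler gives a running record of who is probing the printer, and `top_sources` produces a shortlist worth investigating.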
Interestingly, Bellekens found that at one point an individual had logged into the printer, which warranted closer investigation. The individual logged in without brute force, suggesting that one of the earlier scans had already picked up the decoy's credentials. "They went to click on a couple of buttons to change some of the settings in there. So that was actually quite quick to see that they got fooled by a ChatGPT decoy," said Bellekens in an exclusive interview with VentureBeat.
Deception Cybersecurity Strategy
Deception cybersecurity is a popular threat detection technique that tricks attackers by using honeypots, or fake assets. Using generative AI to create decoys at scale could be a powerful force multiplier in obscuring potential entry points and hardening defense against threat actors. As AI-driven solutions and tools like ChatGPT continue to evolve, organizations will have valuable opportunities to experiment with deception cybersecurity and go on the offensive against threat actors.
It's important to note that AI is changing deception cybersecurity, leading towards what Gartner calls an automated moving-target defense (AMTD) strategy. Essentially, this strategy uses automation to move or change the attack surface in real time. By 2025, 25% of cloud applications are expected to leverage AMTD features and concepts as part of built-in prevention approaches, according to Gartner.
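The core AMTD idea of automation re-randomizing what an attacker can observe can be shown in a toy form. The banner strings and port range below are invented for illustration; a production AMTD system would rotate far more than these two attributes.

```python
# Toy illustration of the moving-target idea behind AMTD: periodically
# re-randomize a decoy's externally visible attributes so that earlier
# reconnaissance data goes stale. Attribute pools are invented examples.
import random

BANNERS = ["HP-ChaiSOE/1.0", "Lexmark_Web_Server", "EPSON-HTTP/1.0"]
PORT_POOL = range(8000, 9000)

def next_surface(rng=random):
    """Pick the decoy's next externally visible configuration.

    Called on a timer, this shifts the attack surface: an attacker who
    fingerprinted the service an hour ago now holds outdated information.
    """
    return {"port": rng.choice(list(PORT_POOL)),
            "banner": rng.choice(BANNERS)}
```

Scheduling `next_surface` (for example, with cron or a background thread) and restarting the decoy with the new configuration gives a minimal moving-target loop.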
As the market for deception technology continues to grow, organizations will have more resources than ever before to defend themselves against advanced persistent threats.