Guarding Against AI-driven Cybercrimes: The Battle for Cybersecurity

Published On Sun Sep 22 2024

Generative AI promises a future where you no longer need to be a skilled writer to draft a story or a trained software engineer to code. But there’s a dark side to this democratization: AI is enabling people with little technological know-how to become cybercriminals.

AI-powered Cybercrimes

I’m a cybersecurity researcher who monitors the darknet — the shadowy area of the Internet where people can buy illegal goods such as guns, drugs, and child pornography. Recently, I’ve noticed a worrying trend: People are selling increasingly powerful, AI-driven hacking tools with the potential to cause enormous damage.

Evolution of AI in Cybercrime

Novices with little hacking experience can now use AI-generated phishing content, malware, and more to target everything from individual bank accounts to power plants. Easier access to hacking tools is especially dangerous as more physical devices and systems, from cars to toothbrushes to the electric grid, are connected to the Internet, leaving them open to attack.

The “Flipper Zero,” a small device anyone can use to hack traffic lights, is an early example of the threat that amateur hackers can pose to physical systems.

The Benefits and Risks of AI Democratization

The democratization of AI, including through open-source platforms, has major benefits. When anyone can experiment with the technology, it enables entrepreneurship and innovation and prevents monopolization by big tech companies. At the same time, open AI models can be bootstrapped for nefarious purposes.

Artificial Intelligence-Powered Cybersecurity: Staying Ahead of Evolving Threats

Rather than cage AI, we can fight back by deploying advanced AI cybersecurity tools and updating our defensive strategies to better monitor hacking communities on the darknet.

Companies like Google, OpenAI, and Microsoft put guardrails on their products to ensure AI isn’t used to hack, produce explicit content, guide the creation of weapons, or engage in other illegal behavior. Yet the proliferation of hacking resources, sexual deepfakes, and other illicit content made using AI suggests that bad actors are still finding ways to cause harm.

Challenges and Solutions

One technique hackers use is crafting indirect queries to large language models such as ChatGPT that bypass built-in safeguards. A hacker may disguise a request in a way the AI fails to recognize as malicious, leading the system to produce phishing materials or violent content.

Hackers can also build alternative chatbots using open-source AI models — think ChatGPT but without guardrails. FraudGPT and WormGPT craft convincing phishing emails and give advice about hacking techniques. Some people are using jerry-rigged large language models to generate deepfakes of child pornography. This is just the start.

AI-Driven Cybersecurity Solutions: The Next Frontier of Protection

Our best weapon is to fight fire with fire by using AI as a defensive cybersecurity tool. AI can help us continuously learn and respond to threats with greater agility. One of its greatest strengths is pattern recognition, which can be used to automate monitoring of networks and more easily identify potentially harmful activity.
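The pattern-recognition idea above can be illustrated with a deliberately simple sketch: flagging time windows whose activity deviates sharply from the norm. Real AI-driven monitoring uses far richer models; this toy z-score detector, with hypothetical login-failure counts, only shows the basic shape of automated anomaly detection.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of time windows whose event count deviates more
    than `threshold` standard deviations from the mean -- a toy
    stand-in for the pattern-recognition step in automated monitoring."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Hypothetical hourly login-failure counts for one server; the spike
# at index 5 stands in for a brute-force attempt.
counts = [12, 9, 11, 10, 13, 240, 12, 8]
print(flag_anomalies(counts))  # [5]
```

Production systems replace the z-score with learned models of normal behavior, but the workflow is the same: summarize activity, score it against a baseline, and surface the outliers for investigation.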

We’re already seeing a wave of AI-powered cybersecurity. Cloudflare is using AI to track other AI and block bots from scraping content. Mandiant is using AI to investigate cybersecurity incidents. IBM deploys AI to accelerate threat detection and mitigation.

Conclusion

We shouldn’t wall off access to generative AI and all the incredible things it can do. But we must use AI strategically to stay one step ahead of the threats it will inevitably bring.

To ensure AI cybersecurity can adapt to global threats, we must invest more in multilingual large language models. Currently, disproportionate resources go to developing English language models.

The CrowdStrike outage earlier this year reminded us of the fragility of global cyber infrastructure. One bad update from a company with good intentions was enough to cause billions of dollars in damage, bring air travel to a halt, and crash 911 emergency services. Now imagine AI hacking tools in the hands of anyone around the world who wants to cause harm, in an age when more and more of the products and systems we own contain chips and connect to the Internet.