Navigating the Hazards of AI Misuse: Insights and Precautions | interest.co.nz

Published On Mon Jul 08 2024
Is AI crashing into a wall of disappointment?

As the current artificial intelligence wave crests, negativity is setting in and scepticism is growing over what the “change everything” technology will actually deliver. Even if you adjust for humanity’s understandable bias for expecting the worst, just in case it comes true, the number of Cassandras warning that AI is overblown and even unsafe easily outweighs the positive takes.

Some of the caution comes from unexpected quarters, like Google, the tech giant that has gone in boots and all with AI at every level, from handsets to cloud services, to shore up its business fortunes.

Google’s DeepMind Research

Researchers at Google’s DeepMind have published a thought-provoking paper that describes real-world misuses of generative AI (GenAI), the technology that can create text, digital images, video and audio.

GenAI can do this because its large language models (LLMs) are trained on massive amounts of human-generated data, which pattern-recognition algorithms running on powerful computer systems assemble into renditions that can be eerily indistinguishable from what humans produce. This is the interactive form of AI that many people encounter through chatbots like Google Gemini, OpenAI ChatGPT, Microsoft Copilot and Anthropic Claude.
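Part of what makes these chatbots so accessible is that the same capability is a few lines of code away. As a hedged illustration, here is a minimal sketch using OpenAI’s official Python client; the model name and prompt are assumptions for demonstration, not details from the article:

    # A minimal sketch of programmatic GenAI access via OpenAI's Python
    # client; the model name and prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, swap for any available one
        messages=[
            {"role": "user", "content": "Draft a short product announcement."}
        ],
    )

    # The generated text comes back as plain strings, ready to publish
    # at whatever scale the caller chooses.
    print(response.choices[0].message.content)

Run in a loop with different prompts and personas, those same few lines are all the infrastructure a large-scale influence or spam operation needs.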

Generative AI Risks & Concerns

To nobody’s surprise, such advanced digital plagiarism of human works and characteristics has opened AI up to vast possibilities for abuse, as the Google researchers’ paper Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data shows.

AI Misuse and Risks

The paper describes a plethora of tactics for using AI to impersonate real people and their work for malicious purposes online, and it’s a must-read for anyone trying to understand where we are headed with the technology.

What stands out is that the abuse does not come from clever prompt hacking to get past AI guardrails, but from simply using the systems’ easily accessible capabilities, the DeepMind researchers note.

That is, it entails using AI as intended, with minimal technical expertise. That lets malicious people engage in online fraud, run sockpuppet accounts to manipulate opinion or falsely inflate popularity, impersonate celebrities in fake ads, create bogus websites that trick people into downloading malware, sharpen up phishing campaigns, and much more.

Anyone working in the information security field will "head desk" while reading the Google DeepMind paper, wondering how they’ll be able to defend against machine-generated attacks launched at scale as threat actors adopt AI.

AI Risk Disclosures and Dataset Hazards

Misuse of AI is definitely something that publicly traded tech companies such as Facebook’s parent company Meta, Microsoft, Google, Oracle and others are aware of, to the point that they have started adding AI threat scenarios to the risk sections of their mandatory investor disclosure documents.

Fun disclosure from Google: “Unintended consequences, uses or customization of our AI tools and systems may negatively affect human rights, privacy, employment or other social concerns." https://t.co/lyjWJjHLE6

Sometimes it’s not third-party threat actors that pose the AI risk, but the organisations building the technology themselves. Germany’s Large-scale Artificial Intelligence Open Network (LAION) is a non-profit that has assembled some of the most popular free datasets in the world, sponsored by AI companies such as Hugging Face and Stability AI.

LAION says datasets like LAION-5B, which indexes 5.85 billion image-text pairs, “are simply indexes to the Internet” that link to pictures. United States-based Human Rights Watch took a closer look and found that the dataset had led to personal photos of Australian children being scraped off the web and used to train AI models.

HRW found that the LAION-5B dataset “contains links to identifiable photos of Australian children. Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.”
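Because such a dataset is essentially a giant table of links and captions, anything written in either field travels with the image. The sketch below, with invented records and field names rather than real LAION-5B entries, shows why identities are so easy to recover:

    # Hypothetical index records shaped like an image-text-pair dataset:
    # each row is just a caption plus a URL pointing at the original image.
    records = [
        {"caption": "Jane Doe's 5th birthday party, Brisbane 2016",
         "url": "https://school-blog.example/photos/jane-doe-party.jpg"},
        {"caption": "sunset over the harbour",
         "url": "https://photos.example/img/4821.jpg"},
    ]

    # Anything a caption or URL reveals -- a name, a place, a date --
    # is indexed right alongside the link to the photo itself.
    for row in records:
        print(row["caption"], "->", row["url"])

No scraping trickery is needed: whoever downloads the index gets the identifying text and the pointer to the photo in the same row.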

Such datasets could be misused by other tools to create deepfakes, putting children at risk, HRW technology researcher Hye Jung Han pointed out.

AI Challenges and Concerns

AI requires a constant stream of new data to update the models that generate content, like a zombie insatiably hunting for fresh brains. Tech companies think that means they can simply take it and then sell it back to us through GenAI service subscriptions running on their proprietary billion-dollar cloud AI systems.
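The taking part is technically trivial. As a hedged sketch only (the URL is a placeholder, and real crawlers add queuing, politeness rules and deduplication), a few lines of Python with the widely used requests and BeautifulSoup libraries are enough to strip the text out of a page for a training corpus:

    # A minimal sketch of web-content harvesting; the URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/article", timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

    # The extracted text would then be cleaned and folded into a training set.
    print(text[:200])

Multiply that by billions of pages and you have the raw material for an LLM, whether or not the authors ever agreed to the arrangement.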


Recently, Microsoft’s head of AI, Mustafa Suleyman, created a furore by claiming web content is “freeware” that companies can help themselves to. Needless to say, Suleyman might need to take a refresher course in intellectual property law and precedents before he broaches the AI data-gathering subject again.