10 Ways to Combat AI-Generated Disinformation

Published On Thu Nov 07 2024
10 Ways to Combat AI-Generated Disinformation

Latest

AI

Amazon

Apps

Biotech & Health

Climate

Cloud Computing

Commerce

Crypto

Enterprise

EVs

Fintech

Fundraising

Gadgets

Gaming

Google

Government & Policy

Hardware

Instagram

Layoffs

Media & Entertainment

Meta

Microsoft

Privacy

Robotics

Security

Social

Space

Startups

TikTok

Transportation

Venture

Events

Startup Battlefield

StrictlyVC

Newsletters

Podcasts

Videos

Partner Content

TechCrunch Brand Studio

Crunchboard

Contact Us

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

This Week in AI: It's shockingly easy to make a Kamala Harris Deepfake

It was shockingly easy to create a convincing Kamala Harris audio deepfake on Election Day. It cost me $5 and took less than two minutes, illustrating how cheap, ubiquitous generative AI has opened the floodgates to disinformation.

Creating a Harris deepfake wasn’t my original intent. I was playing around with Cartesia’s Voice Changer, a model that transforms your voice into a different voice while preserving the original’s prosody. That second voice can be a “clone” of another person’s — Cartesia will create a digital voice double from any 10-second recording.

One Tech Tip: How to spot AI-generated deepfake images

So, I wondered, would Voice Changer transform my voice into Harris’? I paid $5 to unlock Cartesia’s voice cloning feature, created a clone of Harris’ voice using recent campaign speeches, and selected that clone as the output in Voice Changer. It worked like a charm: I’m confident that Cartesia didn’t exactly intend for its tools to be used in this way. To enable voice cloning, Cartesia requires that you check a box indicating that you won’t generate anything harmful or illegal and that you consent to your speech recordings being cloned.

Possible Solutions

That’s a problem, it goes without saying. So what’s the solution? Is there one? Cartesia can implement voice verification, as some other platforms have done. But by the time it does, chances are a new, unfettered voice cloning tool will have emerged.

Researchers analyze the characteristics of AI-generated deepfakes

I spoke about this very issue with experts at TC’s Disrupt conference last week. Some were supportive of the idea of invisible watermarks so that it’s easier to tell whether content has been AI-generated. Others pointed to content moderation laws such as the Online Safety Act in the U.K., which they argued might help stem the tide of disinformation. Call me a pessimist, but I think those ships have sailed. We’re looking at, as CEO of the Center for Countering Digital Hate Imran Ahmed put it, a “perpetual bulls— machine.”

Increasing Disinformation

Disinformation is spreading at an alarming rate. Some high-profile examples from the past year include a bot network on X targeting U.S. federal elections and a voicemail deepfake of President Joe Biden discouraging New Hampshire residents from voting. But U.S. voters and tech-savvy people aren’t the targets of most of this content, according to True Media.org’s analysis, so we tend to underestimate its presence elsewhere.

The volume of AI-generated deepfakes grew 900% between 2019 and 2020, according to data from the World Economic Forum.

How Deepfake Videos Are Used to Spread Disinformation - The New ...

Meanwhile, there’s relatively few deepfake-targeting laws on the books. And deepfake detection is poised to become a never-ending arms race. Some tools inevitably won’t opt to use safety measures such as watermarking, or will be deployed with expressly malicious applications in mind.

Short of a sea change, I think the best we can do is be intensely skeptical of what’s out there — particularly viral content. It’s not as easy as it once was to tell truth from fiction online. But we’re still in control of what we share versus what we don’t. And that’s much more impactful than it might seem.

Other Notable AI News This Week

ChatGPT Search review: My colleague Max took OpenAI’s new search integration for ChatGPT, ChatGPT Search, for a spin. He found it to be impressive in some ways, but unreliable for short queries containing just a few words.

Amazon drones in Phoenix: A few months after ending its drone-based delivery program, Prime Air, in California, Amazon says that it’s begun making deliveries to select customers via drone in Phoenix, Arizona.

Ex-Meta AR lead joins OpenAI: The former head of Meta’s AR glasses efforts, including Orion, announced on Monday she’s joining OpenAI to lead robotics and consumer hardware. The news comes after OpenAI hired the co-founder of X (formerly Twitter) challenger Pebble.

AI-generated recaps: Amazon has launched “X-Ray Recaps,” a generative AI-powered feature that creates concise summaries of entire TV seasons, individual episodes, and even parts of episodes.

Anthropic hikes Haiku prices: Anthropic’s newest AI model has arrived: Claude 3.5 Haiku. But it’s pricier than the last generation, and unlike Anthropic’s other models, it can’t analyze images, graphs, or diagrams just yet.

Apple acquires Pixelmator: AI-powered image editor Pixelmator announced on Friday that it’s being acquired by Apple. The deal comes as Apple has grown more aggressive about integrating AI into its imaging apps.

An ‘agentic’ Alexa: Amazon CEO Andy Jassy last week hinted at an improved “agentic” version of the company’s Alexa assistant — one that could take actions on a user’s behalf. The revamped Alexa has reportedly faced delays and technical setbacks, and might not launch until sometime in 2025.

AI Facing Challenges

Pop-ups on the web can fool AI, too — not just grandparents. In a new paper, researchers from Georgia Tech, the University of Hong Kong, and Stanford show that AI “agents” — AI models that can complete tasks — can be hijacked by “adversarial pop-ups” that instruct the models to do things like download malicious file extensions.