Four bad AI futures arrived this week
By Brian Merchant
Over the past week or so, there’s been little need to imagine dystopian visions of what a world overrun by some of the worst-case AI scenarios might look like—the tech industry and its acolytes happily offered them up to us. And while grim portents of AI’s impacts on society and culture are not exactly uncommon, it’s somewhat rare that we get to experience the full gamut of bad AI futures in such a compressed timeframe. Or at least it felt that way to me, and I tend to watch these things pretty closely—that’s the job.

From stark evidence that AI is undermining a generation of students’ ability to learn in higher ed, to tech companies vowing to replace workers with AI, to a victim’s family using an AI avatar of the deceased to sway the courts, this was one of those stretches where I swear I could feel the blunt force of an unwelcome future arriving at speed, in stagger-step: a sense that some of the unhappiest impacts of AI—the product of an industry-wide push to thrust AI products into every plane of existence—are no longer speculative, but manifesting, taking shape, and entrenching.

As Harvard historian Erik Baker put it in response to one of the stories below, “Beginning to feel consumed by what I can only describe as climate anxiety but for AI.” I’m pretty numb to it by now, but even I felt it this week—it’s hard for any good humanist not to. There were at least four distinct bad AI futures articulated by Silicon Valley and co., by my count. So let’s break them down, one by one.
AI Avatars for Social Interaction
First, let’s pay a visit to our friend Mark Zuckerberg, who is making the press rounds touting Meta’s AI plans. The CEO proclaimed that the average American has only three friends but wants fifteen, and that AI-generated avatars can make up the difference. Here’s the Wall Street Journal, reporting on Zuck’s media blitz:

“The average American I think has, it’s fewer than three friends, three people they’d consider friends, and the average person has demand for meaningfully more, I think it’s like 15 friends,” he said in the interview with podcaster Dwarkesh Patel. On a separate podcast with media analyst Ben Thompson, Zuckerberg continued: “For people who don’t have a person who’s a therapist, I think everyone will have an AI.” … The Meta CEO is now throwing resources at AI chatbots—both in its social media apps and in its hardware devices. Meta AI, as it is called, is accessible via Instagram and Facebook, as a stand-alone app and on Meta’s Ray-Ban smart glasses. Zuckerberg said Tuesday that nearly a billion people are using the feature now monthly. Zuckerberg said personalized AI isn’t just about knowing basic information about a user, it’s about ensuring chatbots behave as a good friend might. “You have a deep understanding of what’s going on in this person’s life,” he said.

Set aside for a second how deeply pathetic this imagined future is, and what it says about the man advocating for it, and consider that this is what Meta is actively trying to build: a digital ecosystem designed to maximize users’ time on platform, populated by AI-generated “friends” spun out of the data a user has fed that platform over time about his life and preferences. Far from helping folks find actual companionship, it’s a recipe for more isolation and loneliness—which is exactly the point, as my good friend Paris Marx points out in his newsletter, since the goal is to get these AI friends to help Zuck sell more targeted advertising.

There is, without a doubt, a loneliness epidemic afflicting millions, and I do not blame anyone who may find relief from it in an AI chatbot. But let’s be clear: Meta, with its array of addictive and youth-targeting social media networks, is among the tech companies that created the very conditions that gave rise to said epidemic. It built and sold this very future, of online connection over all else! Meta now hopes to sell a solution to the modern friendship problem via more personalized digital products that, surprise, further exacerbate the loneliness plague while staking out new revenue streams for the company.

And just think for a second about what this future actually looks and feels like: position yourself in front of your monitor, or on your phone late at night, in a Facebook Messenger group chat with, oh, eight of your fifteen AI friends, a medley of auto-generated texts filling the screen, reversions to the mean of human banter, human gossip, human flirting, ignoring the ads, and intuiting on some level that everything being said has been said before, by a person. This is the frontrunner for the future of AI-infused social media, and it is bleak.
Cheating with AI in Education
The truly viral AI story of the week was the New York Mag piece, “Everyone Is Cheating Their Way Through College,” by James D. Walsh, whose subhed argues that “ChatGPT has unraveled the entire academic project.” The story treads ground that has certainly been trod before, but in nabbing some truly excellent and chilling quotes from students who are now so inured to using ChatGPT to cheat that they apparently do not hesitate to talk to reporters about it, the piece ups the moral panic quotient by a good order of magnitude or two. And don’t get me wrong: some of that moral panic is well-deserved.
AI Avatars in the Legal System
One thing that might happen if your critical thinking skills have eroded is that you might watch an AI-generated video in which a recreation of a man who has died issues a message from the grave, and believe that it has some bearing on reality. Alas, this is exactly what a judge in Arizona did. Here’s the background, courtesy of 404 Media:

An AI avatar made to look and sound like the likeness of a man who was killed in a road rage incident addressed the court and the man who killed him: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI avatar of Christopher Pelkey said. “In another life we probably could have been friends. I believe in forgiveness and a God who forgives. I still do.” It was the first time the AI avatar of a victim—in this case, a dead man—has ever addressed a court, and it raises many questions about the use of this type of technology in future court proceedings.

The avatar was made by Pelkey’s sister, Stacey Wales.