OpenAI's use of a Scarlett Johansson-like voice in ChatGPT exposed gaps in the law
Scarlett Johansson at the Madrid premiere of her movie Fly Me to the Moon. (Cover Images via AP Images)

If Ursula from Disney's The Little Mermaid had artificial intelligence capabilities, she might have snagged Ariel's voice without having to provide a pair of legs in return. But that was then, and this is now. Today, the villain is AI, and the only superpower needed to steal a voice is an audio sample, which can be fed into a neural network trained to imitate the speaker. The resulting generative model can then produce entirely new recordings in that voice.
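To underline how low that barrier has become, here is a minimal sketch of zero-shot voice cloning using the open-source Coqui TTS library. This is an illustration under stated assumptions, not any vendor's actual pipeline; the model name, file names, and sample sentence are placeholders.

```python
# Minimal zero-shot voice-cloning sketch (assumes `pip install TTS`,
# the open-source Coqui TTS package). The reference clip, output path,
# and text below are hypothetical placeholders.
from TTS.api import TTS

# Load a multilingual model that supports voice cloning from a short sample.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio condition the model on the target
# voice; it can then synthesize entirely new speech in that voice.
tts.tts_to_file(
    text="This sentence was never spoken by the person whose voice you hear.",
    speaker_wav="reference_sample.wav",  # short recording of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```

A few seconds of sampled audio is all a script like this needs, which is precisely why an unauthorized sound-alike is so difficult to police after the fact.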
OpenAI's Response to Scarlett Johansson's Concerns
In May, actress Scarlett Johansson released a statement expressing her shock when OpenAI demonstrated a voice it called “Sky” that was “eerily similar” to her own. Johansson, known for voicing a virtual assistant in the 2013 film Her, revealed that OpenAI had approached her to voice GPT-4o, a request she declined. OpenAI responded to Johansson's statement, clarifying that Sky was never intended to resemble Johansson's voice and that the voice actor had been cast before any contact was made with Johansson. Nevertheless, shortly after the launch of GPT-4o, OpenAI paused use of the voice in question.
Legal Implications and Regulations Concerning AI Voices
This incident has sparked discussions around intellectual property rights, the right of publicity, and the regulation of AI. A congressional subcommittee even invited Johansson to testify on the matter. The legal landscape around AI's use of voices is complex: the right of publicity for celebrities and public figures is governed by a patchwork of varying state laws.
Federal bills targeting deepfake technologies have been introduced over the years, but none has been enacted into law. The absence of clear regulations complicates cases in which AI-generated voices are used without consent.
Efforts to Address AI Misuse
Efforts to address the misuse of AI technologies include the Identifying Outputs of Generative Adversarial Networks Act, signed into law in 2020, which supports research on tools to detect manipulated or synthesized content. The Federal Trade Commission has also taken steps to mitigate risks from voice cloning through initiatives like the Voice Cloning Challenge.
Despite these efforts, there are concerns that detection tools will struggle to keep pace as generative models improve. The evolving capabilities of AI raise questions about how deepfake technologies could be misused for fraud and impersonation.
Protecting Voices in the Age of AI
While there are ways to protect voices, such as negotiating voice rights into celebrities' contracts and watermarking human-generated content, challenges persist. Individuals whose voices are cloned by AI without consent may need to resort to legal action, such as sending cease-and-desist letters.
As the technology continues to advance, the need for robust regulations and safeguards to prevent the misuse of AI-generated voices becomes increasingly critical.
This story was originally published in the October/November 2024 issue of the ABA Journal under the headline: “Lost in Translation? OpenAI’s move to add a Scarlett Johansson-like voice to ChatGPT exposed gaps in the law.”