OpenAI is pausing one of its AI assistant voices following scrutiny and criticism over its striking resemblance to Scarlett Johansson's voice.
The company stated:
“We've heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them."
OpenAI States AI Voices Should Not Imitate Celebrities
Last week, OpenAI launched GPT-4o, a new AI model featuring a variety of audio voices for ChatGPT.
The company auditioned over 400 voice actors, ultimately selecting five to create the voices named Breeze, Cove, Ember, Juniper, and Sky.
To protect privacy, the identities of the voice actors were not disclosed.
Social media users quickly noticed that the 'Sky' voice closely resembled Scarlett Johansson's voice, particularly her performance as an AI assistant in the film Her.
In a blog post on Monday, OpenAI stated that AI voices "should not deliberately mimic a celebrity's distinctive voice."
The company explained:
“Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice."
The statement echoed earlier comments by OpenAI CTO Mira Murati, who said that Sky was not intentionally designed to mimic Johansson.
Joanne Jang, the company's model behavior lead, added on Monday that OpenAI was "in conversations with ScarJo's team" to address the "confusion."
Those Who Heard It at the Unveiling Ridiculed It
OpenAI aimed to create voices that were "approachable and trustworthy," with a "rich tone" that is "natural and easy to listen to."
The Sky voice for ChatGPT had not yet been widely released, but clips from the product announcement and teasers featuring OpenAI employees using it went viral online last week.
Some critics found Sky's voice perhaps too easy to listen to.
The controversy even inspired a segment on The Daily Show, where senior correspondent Desi Lydic described Sky as a "horny robot baby voice."
Lydic said:
“This is clearly programmed to feed dudes' egos. You can really tell that a man built this tech."
Johansson Detailed Her Refusal
In a lengthy statement, Scarlett Johansson expressed her "shock, anger, and disbelief" that the Sky demo voice sounded "so eerily similar to mine that even my closest friends and news outlets couldn't tell the difference."
The actress, known for voicing an AI operating system in the sci-fi romance Her, revealed that OpenAI CEO Sam Altman had approached her eight months ago, proposing that she lend her voice to one of ChatGPT's assistants.
She shared:
“He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people."
After considering the offer, she declined.
However, approximately nine days ago, Altman reached out again, asking her to reconsider.
Before she could respond, OpenAI released the GPT-4o demo, which included the Sky voice.
Johansson stated that her "friends, family, and the general public" immediately noticed the resemblance.
She further mentioned that Altman seemed to acknowledge the similarity by tweeting the word "her" during the update's rollout, suggesting that the resemblance was intentional.
In her statement, Johansson revealed that she had been "forced to hire legal counsel," who subsequently sent two letters to Mr. Altman and OpenAI.
The letters outlined the company's actions and requested a detailed explanation of the process used to create the 'Sky' voice.
She attributed the company's decision to remove Sky directly to her legal pressure.
Johansson, a two-time Oscar nominee, also highlighted broader concerns about disinformation and the lack of regulatory and legal safeguards surrounding artificial intelligence (AI).
She added:
“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected."
Protest Against AI Exploitation
In November of last year, Johansson issued a legal threat against a company accused of using her likeness in an advertisement.
The video reportedly featured images generated by Lisa AI, although the company denied these claims.
More recently, a petition authored by the non-profit organisation Artist Rights Alliance garnered signatures from more than 200 artists.
The petition urges AI companies to refrain from exploiting artists' voices and likenesses.
Signatories include Billie Eilish, Aerosmith, Camila Cabello, Katy Perry, Kate Hudson, Jon Bon Jovi, Imagine Dragons, Nicki Minaj, and Sheryl Crow, among others.
Unauthorised Use of AI Poses Serious Ethical and Legal Risks
Amid growing apprehension over AI's potential impact on industries including Hollywood, Johansson's response and OpenAI's explanation highlight broader concerns.
Recent months have seen companies like OpenAI facing legal challenges from content creators, artists, and media entities who allege unauthorised use of their material to train AI models.
This issue has been a focal point for SAG-AFTRA, particularly during Hollywood's recent strikes.
The resulting contract between the actors' union and studios includes provisions to limit AI's use in film and television, ranging from voiceovers to full-body scans.
The unauthorised use of AI, particularly without consent from creators, artists, authors, and media companies, raises significant ethical and legal concerns.
Training AI models on copyrighted material without permission raises issues of intellectual property infringement, undermining creators' rights and the established frameworks for compensating their work.
Unauthorised use can also lead to misrepresentation or manipulation of content, potentially resulting in reputational damage or misinformation.
It further erodes trust between AI developers and the broader creative community, hindering collaboration and innovation in the field.
Such misuse not only violates legal principles but also undermines the integrity and fairness of the creative industries, highlighting the need for robust regulations and ethical guidelines to govern AI development and deployment.