Scarlett Johansson lawyers up over ChatGPT voice that ‘shocked and angered’ her

By Brian Fung, Clare Duffy and Ramishah Maruf, CNN

New York (CNN) — Actor Scarlett Johansson said in a statement shared with CNN on Monday that she was “shocked, angered and in disbelief” that OpenAI CEO Sam Altman would use a synthetic voice released with an update to ChatGPT “so eerily similar” to hers.

The statement comes after OpenAI said it is hitting the pause button on the update following comparisons between the voice and the fictional voice assistant Johansson portrayed in the quasi-dystopian film “Her.”

The retreat by OpenAI follows a backlash to the artificial voice, known as Sky, which critics described as overly familiar with users and sounding as if it had emerged from a male developer’s fantasy. It was widely mocked for its flirtatious tone.

“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI said in a post on X Monday. “We are working to pause the use of Sky while we address them.”

Johansson said Altman offered to hire her last September to voice the ChatGPT 4.0 system. She said she declined the offer for “personal reasons.”

“Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.”

Johansson said she hired legal counsel, and that OpenAI “reluctantly agreed” to take down the “Sky” voice after her counsel sent Altman two letters.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected,” Johansson wrote.

The voice in question is not derived from Johansson’s, the company said in a blog post Sunday, but instead “belongs to a different professional actress using her own natural speaking voice.”

Altman reiterated the company’s stance that “Sky” was voiced by a different actress in a statement Monday, following Johansson’s claims.

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Altman said. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

OpenAI said that with each of its AI voices, it tried to create “an approachable voice that inspires trust,” one that contains a “rich tone” and is “natural and easy to listen to.” The ChatGPT voice mode that used the Sky voice had not yet been widely released, but videos from the product announcement and teasers of OpenAI employees speaking with it went viral online last week.

Some who heard Sky derided it as perhaps too easy to listen to. Last week, the controversy inspired a segment on The Daily Show in which senior correspondent Desi Lydic described Sky as a “horny robot baby voice.”

“This is clearly programmed to feed dudes’ egos,” Lydic said. “You can really tell that a man built this tech.”

Even Altman appeared to acknowledge the widespread parallels users were drawing with Johansson when he posted to X on the day of the product’s announcement: “her.” Johansson said Altman used this post to insinuate “the similarity was intentional.”

“Her” is the title of the 2013 film in which Johansson portrays an artificially intelligent voice assistant with whom the protagonist, played by Joaquin Phoenix, falls in love, only to be left heartbroken when the AI admits she is also in love with hundreds of other users and later becomes inaccessible altogether.

Questions about leadership

The criticism surrounding Sky highlights broader societal concerns about the potential biases of a technology designed by tech companies largely led or funded by White men.

The announcement came after OpenAI leaders were forced to defend the company’s safety practices over the weekend, when a departing employee called its priorities into question.

Jan Leike, who formerly led a team focused on long-term AI safety but left OpenAI last week along with Co-Founder and Chief Scientist Ilya Sutskever, posted a thread on X Friday claiming that “over the past years, safety culture and processes have taken a backseat to shiny products” at OpenAI. He also raised concerns that the company was not devoting enough resources to preparing for a possible future “artificial general intelligence” (AGI) that could be smarter than humans.

Altman quickly responded saying he appreciated Leike’s commitment to “safety culture” and added: “He’s right we have a lot more to do; we are committed to doing it.” The company also confirmed to CNN that in recent weeks it had begun to dissolve the team Leike led, and instead was integrating members of the team across its various research groups. A spokesperson for the company said that structure would help OpenAI better achieve its safety objectives.

OpenAI President Greg Brockman responded in a longer post on Saturday, which was signed with both his name and Altman’s, laying out the company’s approach to long-term AI safety.

“We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it,” Brockman said. “We’ve repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.”

He added that as AI becomes smarter and more integrated with humans’ daily lives, the company is focused on having in place “a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities.”

The-CNN-Wire
™ & © 2024 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
