INTRODUCTION

OpenAI’s lawyers are having a busy year. The darling of the AI world is already fending off a slew of copyright infringement lawsuits, and dealing with a Federal Trade Commission inquiry. But now, OpenAI has a bigger problem: they’ve ticked off Scarlett Johansson.

If you haven’t seen the critically acclaimed 2013 Spike Jonze film, “Her”, now would be a good time to do so, just as November 2022 was the time to watch Alex Garland’s “Ex Machina”.

“Her” featured Scarlett Johansson as the voice of an AI-driven, Siri-like operating system (called Samantha), which becomes increasingly human-like over the course of the movie as it approaches sentience, and Joaquin Phoenix as the lonely writer who falls in love with Samantha.

In a classic ‘life-imitates-art’ turn of events, OpenAI’s latest iteration of ChatGPT, launched a few days ago, featured a Voice Mode option called “Sky”, which sounded ‘eerily similar’ to – cue the theme song from Suits – Scarlett Johansson.

THE WRATH OF SCARJO

Johansson did not, naturally, take kindly to OpenAI’s “Sky” ripping off her voice without consent or compensation. In her words, she was “shocked, angered and in disbelief” when she realised that OpenAI had generated a voice “eerily similar” to hers.

To make matters more embarrassing for OpenAI, the actress disclosed that OpenAI’s CEO, Sam Altman, had actually approached her in September last year to voice ChatGPT’s 4.0 system, saying her voice was ‘comforting’ and ‘could potentially bridge the gap between tech companies and creatives’. Altman’s flattery notwithstanding, Johansson declined the offer. Altman contacted her again two days before the launch of “Sky”, asking her to reconsider. Before she could respond, she says, “Sky was out there.”

Johansson then retained legal counsel, who wrote two letters to Altman and OpenAI asking them to detail the exact process by which they created the “Sky” voice, following which OpenAI took down the voice.

Johansson is no stranger to high profile lawsuits. In 2021, she brought a breach of contract lawsuit against Disney over her payout for “Black Widow”, a movie that was intended for theatrical release but was ultimately dumped on Disney’s streaming platform.

She is no stranger to having her likeness misappropriated by AI tools, either. In November 2023, she took legal action against an AI app, Lisa AI, that used her name and likeness in an online advertisement without her permission. The advertisement was spotted on Oct. 28, 2023 and appears to have since vanished from the internet.

OPENAI’S DEFENCE: RELAX, IT’S JUST A COINCIDENCE

OpenAI responded in a blog post saying “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers”, that it belonged to another professional actress using her own natural speaking voice, who was cast before OpenAI approached Johansson, and that the company “carefully selected” each of the voices over the course of a five-month process.

OpenAI’s shifty arguments are unconvincing. Nor does it help their defence that Altman acknowledged in a speech last year that he and other OpenAI executives were inspired by “Her”, that OpenAI employees posted references to the movie on X after the “Sky” launch, and that Altman himself tweeted a single word: “her.”

OpenAI has already been accused by media companies and authors of using their materials without consent or compensation, and this dispute with Johansson only reinforces the perception that it is a company with little regard for creatives or their rights.

PUBLICITY RIGHTS IN THE USA

If things had gone differently, and Johansson had sued, what might the legal basis of her claim have been? The alleged unauthorized use of her voice, as a celebrity, is arguably a violation of her publicity rights. As we’ve covered in our earlier article, publicity rights refer to an individual’s right to prevent commercial exploitation of their name, identity, voice, and other personality traits. Publicity rights are protected under statute and under common law principles.

In the United States, there is no federal law recognising publicity rights, but legislation has been proposed, including the “No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (NO AI FRAUD Act)”, which aims to protect individuals against deepfake technologies that duplicate their voice, image and likeness; and the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act)”, which seeks to protect the voice and visual likeness of individuals from unauthorized AI recreations.

Meanwhile, several states, such as Nevada, Texas, and Oklahoma, have legal provisions to prevent misuse of personality traits, and states such as New York and Louisiana have implemented laws with the potential to deal with digital replication. Legislation is also pending in certain states: California has proposed a bill to deal with digital replicas, and Tennessee has proposed the ‘Ensuring Likeness Voice and Image Security Act (ELVIS Act)’ to protect artists from unauthorised use of their voices by AI.

CONCLUSION

The unauthorised re-creation of a celebrity’s voice or likeness has become a hot button issue in the age of AI: we’ve recently seen, among others, deepfakes of Tom Hanks, Oprah Winfrey and Piers Morgan in advertisements without their consent, and Taylor Swift in sexually explicit photos. A Channel 4 News analysis of the five most visited deepfake websites found almost 4,000 famous victims of deepfake pornography. There’s even a TikTok account dedicated to Tom Cruise deepfakes.

Unauthorised use of an actor’s likeness was also one of the key points in the SAG-AFTRA Hollywood strike. SAG-AFTRA has unsurprisingly released a statement backing Johansson.

ScarJo’s prompt legal action has clearly resonated with the creative industry. As journalist Casey Newton wrote in Platformer, “Johansson is one of the world’s most famous actresses, and she speaks for an entire class of creatives who are now wrestling with the fact that automated systems have begun to erode the value of their work. OpenAI’s decision to usurp her voice for its own purposes will now get wide and justified attention.”

While all of this has led to many calls to be wary of deepfakes, the trouble is, as venture capitalist Marc Andreessen pointed out, “Detecting deepfakes is not going to work because AI is already too good.” In other words, even if we did get legislation in place quickly enough, we might be entering an era much like the early days of online piracy, when regulators played whack-a-mole with online pirates across the internet for a decade before finally getting a handle on the issue. This will all get much worse before it gets better. Here’s to more of the real ScarJo (and her itchy legal trigger finger) and fewer AI replicas.


Authors: Jyothsna Nanda Kishore, Shantanu Mukherjee