SAG-AFTRA STRIKES AGAIN: A Closer Look at Hollywood Unions, Guild Contracts and AI

AI KILLED THE VIDEO GAME STAR?

A year after going on strike against the Alliance of Motion Picture and Television Producers (AMPTP), the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) has just declared a second strike, this time against video game companies.

The strike caps nearly two years of negotiations between SAG-AFTRA and a group of large gaming companies, including divisions of Activision, Warner Bros., and Walt Disney Co., over the Interactive Media Agreement (IMA), the standard-form guild-negotiated contract that sets minimum terms for the hiring of video game actors. These negotiations failed to yield, in SAG-AFTRA’s view, adequate protections against the use of video game actors’ likenesses in content generated by artificial intelligence (AI). Without these protections, SAG-AFTRA argues, video game companies could train AI to replicate an actor’s voice, or create a digital replica of their likeness, without consent or fair compensation.

Notably, AI-related protections remain the sole issue to be resolved in IMA negotiations – the parties have already found common ground on 24 out of 25 proposals, including historic wage increases.

If this all seems like déjà vu, it’s because protection against AI-generated actor replicas was a key issue in SAG-AFTRA’s strike against the AMPTP a year ago, at the end of which the union did secure certain AI-related protections for its members.

 

A STRIKINGLY CONTENTIOUS MATTER

In early 2023, Hollywood ground to a halt as the Writers Guild of America (WGA) and SAG-AFTRA went on strike in quick succession. The strikes arose from the unions’ failure to reach agreement with the studios on various issues during negotiations to renew their respective template guild contracts, which set minimum terms between union members and the studios.

Notably, AI – and the guardrails necessary to ensure it did not replace union members – was a key talking point in both the WGA and SAG-AFTRA strikes, and even came up in the guild contract between studios and the Directors Guild of America (DGA), the guild that did not strike. When the strikes finally ended and the new guild-industry contracts were signed, each guild had addressed its AI-related concerns slightly differently.

The DGA Agreement

The DGA’s renewed agreement provides that AI will not replace the duties of directors, assistant directors, or unit production managers. AI tools may not perform tasks traditionally carried out by these roles, safeguarding DGA members and ensuring that AI serves as an assisting tool rather than a replacement in production.

The WGA Agreement

The WGA strike ended with the WGA and the AMPTP entering into a Minimum Basic Agreement (MBA) which established comprehensive regulations for the use of generative AI. Under the MBA, AI is designated as a tool, not a writer, ensuring AI-produced material does not receive literary credit, thus protecting writers’ creative and financial rights. Writers have the autonomy to choose whether to use AI, and studios cannot compel them to do so. While studios can provide AI-generated drafts, these are not considered “source material,” guaranteeing writers full credit and compensation. Studios may set their own AI policies and reject AI use that undermines copyright. The contract allows for future negotiations on using copyrighted material to train AI models, but, as we discuss below, does not settle this highly contentious issue.

The SAG-AFTRA Agreement

SAG-AFTRA’s agreement with the AMPTP addresses concerns about AI replacing artists, and about unauthorized use of their likenesses, by establishing clear terms for remuneration and consent for digital replicas. Members will receive compensation for the use of their digital replicas, ensuring their likenesses are not exploited without fair payment. These measures aim to ensure artists retain control over their digital likenesses and receive fair compensation, rather than simply being replaced by AI-generated performances.

 

UNRESOLVED ISSUES

While these strikes brought to the fore, and in some cases addressed, the many fears creatives have over the increasing use of AI in Hollywood, many issues remain unresolved. Notably, the WGA contract does not address the training of AI models on pre-existing material, one of the issues that triggered the WGA strike.

As Ronin Legal has covered before, a slew of cases relating to whether copyrighted works can be used to train AI are currently being heard in various US courts. Several lawsuits, including a November 2022 class action against OpenAI, GitHub, and Microsoft, allege that AI companies used developers’ code without consent to train their AI models. Similarly, several artists have jointly sued Stability AI and Midjourney, claiming unauthorized use of their works to train these companies’ AI models.

While the plaintiffs in these cases argue that using copyrighted material to train AI models violates copyright laws, courts appear unconvinced that these matters present straightforward examples of copyright infringement. For one, judges have questioned the “substantial similarity” between original artworks and AI-generated images that have allegedly been trained on them, highlighting the challenges of identifying specific copyrighted images that have been used to train an allegedly infringing AI model.

Further, using copyrighted works to train large language models might fall under exceptions to copyright infringement, such as text and data mining (TDM). In Authors Guild v. Google, Inc., the court deemed Google’s scanning of millions of books, and its display of limited snippets from them, to be fair use, as the snippets did not serve as a substitute for the actual text of the copyrighted works.

 

SAG-AFTRA’S INDEPENDENT INTERACTIVE MEDIA AGREEMENT

Even as tense negotiations over the IMA were ongoing with the larger video game companies, SAG-AFTRA did manage to conclude a separate standard-form contract in February 2024 (called the “tiered-budget independent interactive media agreement,” or “I-IMA”) with independent video game developers for lower-budget video game projects. The I-IMA contains several AI-related protections and is therefore a useful indicator of the sort of terms SAG-AFTRA is likely seeking from the larger video game companies under the IMA.

Digital Replicas

Article 16 of the I-IMA mandates that employers notify performers at the earliest stage, either during the audition or at the job offer, if a digital replica will be created. Performers must give informed consent, which includes understanding the scope of the replica’s use, and all time spent creating the replica is considered work time. Any usage of a digital replica requires informed consent and negotiated compensation; if the performer is deceased, consent must come from their estate. Compensation varies based on whether the digital replica is used off-camera or on-camera, with specific rates detailed for different scenarios. Upon public release, employers must provide a detailed usage report for the digital replica.

Generative AI

The agreement defines Generative AI (GAI) as a system that learns patterns from data in order to generate content, differentiating it from traditional AI used for specific functions such as character animation. Employers must notify SAG-AFTRA if GAI systems are used to generate material that would replace human work. Additionally, if GAI is used to create content by prompting with a performer’s name, or with a character associated with them, the employer must obtain consent and negotiate compensation at no less than the scale minimum. This ensures that performers’ rights and employment are protected against the use of AI-generated content.

 

ALL OUT WAR

The war over AI’s disruptive role in the media and entertainment industries has been raging on multiple fronts for well over three years now. We’ve covered the numerous copyright infringement cases filed against AI companies by legacy media companies and artists, discussed how AI-generated deepfakes violate publicity rights, including in the context of Scarlett Johansson’s recent allegations that OpenAI made unauthorized use of her voice, and examined how, in most jurisdictions, IP protection is unavailable for purely AI-generated creations. For example, the US Copyright Office requires that AI-generated works contain significant human contribution to qualify for copyright protection. In an echo of this regulatory position, the Recording Academy has declared AI-generated music ineligible for Grammy nominations.

Authors: Alan Baiju, Shantanu Mukherjee, Shruti Gupta