The Creative Clash: The Actors' Strike Ends, but AI Remains
Actors are negotiating protections as studios continue to use AI to create synthetic characters, often trained on performers' recorded work or designed to imitate their likeness.
In a letter last month, I covered the negotiations between the Writers Guild and the studio networks, which secured many protections for writers’ credit and ownership.
Acting, however, raises unique concerns. The material produced is personal: it’s the actor’s face, emotions and idiosyncrasies. While a robot may replace a factory worker outright, many artists and actors will be put in a position where they must at least contend with AI in their creative process.
There are many ways AI could map a human performer onto new content, or use recordings of them to create a synthetic version without their consent. This is highly desirable for large studios when you consider the savings: paying an actor for two full months of work versus a single two-hour recording session, in which an AI is trained on their voice or face. Studios have already used AI to speed up production and reduce costs through AI-generated animation, voice acting and virtual presenters.
There is a culture in Hollywood, and in much of the media empire, that could exacerbate the predatory use of AI in creative industries. The technology is cutthroat in the hands of studio executives who care neither for the quality of the product nor the livelihoods of the people they employ (and will continue to employ). While the entertainment industry is full of examples of innovation, we risk treating innovation as a solution in itself.
Many actors fear studios using generated replicas of them, undercutting their rightful compensation and credits. These "digital twins" threaten to reduce actors to mere templates for AI, cutting both pay and recognition, and jeopardizing job and contract security. An interesting problem arises when a studio wishes to cut a human actor out of a project after it has already started, for any number of reasons. The strike isn't merely about AI; it’s a manifestation of a deeper discontent brewing in the heart of Hollywood. In 2013, a digital effigy of Audrey Hepburn was resurrected for a chocolate commercial. Hollywood has witnessed significant technological shifts before, with the rise of digital effects and computer-generated imagery (CGI) altering labor dynamics and skill requirements. Now the technology is far more advanced.
Last month, the entertainment industry witnessed a historic labor dispute: the longest strike in the history of the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA). The union, representing a diverse array of media professionals, engaged in a prolonged standoff with the Alliance of Motion Picture and Television Producers (AMPTP). Growing concern over the use of AI in media production, including the prospect of actors being replaced on set, fueled the dispute. The combined impact of the strikes led to an estimated $6.5 billion loss to Southern California's economy and the loss of 45,000 jobs.
The use of AI in film and television production has stirred significant debate. The potential for AI to fill in for writers or create digital replicas of actors raises questions about artists' rights and the ethical implications of the technology. These concerns came to the fore during the recent Writers Guild of America (WGA) strike, which lasted 148 days, from early May to late September this year. Primarily triggered by compensation issues arising from streaming services, the strike soon turned to the role of AI in the industry.
The strikes this year highlighted growing concerns across creative industries about AI's potential to replace human labor, particularly in highly skilled and creative fields. Beyond writers, actors and other performers are also pursuing regulations for AI use, such as rules on simulating performances or digitally editing filmed expressions. Great risk is incurred when large, hypercapitalist corporations deploy AI against artists’ best interests. This predatory practice, uncaring of the talent, has been a staple of Hollywood since its inception.
The infusion of computer technology in Hollywood dates back to the late 20th century, with early forms of digital effects and computer-generated imagery (CGI). This initial phase signaled a shift towards embracing technological advancements in filmmaking. The 90s and early 2000s saw a significant leap with movies like "Toy Story" (1995) pioneering digital animation, and "The Matrix" (1999) showcasing advanced visual effects.
The issue hits harder for lesser-known actors. In a bid for exposure, some might agree to have their likeness captured and stored, a digital asset for studios to exploit indefinitely. This scenario is a play within a play, showcasing the struggle of lesser-known actors in the broader theater of Hollywood. There have also been related court rulings on the use of music samples. In Ludlow Music Inc. v. Williams (2000), a two-line lyrical sample from the song ‘I’m The Way’ was used in somebody else’s work.
This minimal sampling led to a dispute, with the judgment acknowledging the extent of copying as substantial, though just barely. The case underscores the vague boundaries surrounding the legality of sampling, even when it involves a seemingly negligible portion of the original work. There are ongoing lawsuits and copyright rulings on this topic. Last year I wrote a fairly detailed piece exploring the issue of AI and copyright; however, there have been many case studies and developments since then. If you’d like a rundown of AI and copyright concerns, you can check it out while I develop a more up-to-date letter on the issue.
In the end, after months of negotiations, a tentative deal was reached, concluding the strike. The resolution of this dispute marked a significant moment in the history of Hollywood labor relations, reflecting both the challenges and opportunities posed by technological advancements in the media landscape.
The agreement between SAG-AFTRA and the film and TV producers' alliance requires studios to obtain an actor's consent and provide payment for using digital versions of their performances. Set to last three years once SAG-AFTRA ratifies it, the deal covers AI-generated roles and synthetic performances. The negotiations, lasting 118 days, focused intensely on the use of AI.
Studios must pay actors if their performances help train AI. They need the actor's consent to use a digital likeness, whether from scans or old footage, and must pay the actor as if they were performing in person; actors can refuse such use. If digital versions appear in background roles, studios still owe compensation, especially if those roles are altered to include speech. For deceased actors, studios must get permission from their estates. Creating a 'synthetic performer' from different actors requires consent and payment for each element used. The agreement follows a similar deal with writers but doesn't cover voice or motion actors in video games or TV animation. It could influence global standards in the arts, with similar actions being considered by unions in Europe and Canada.
Much of the impetus behind the strikes, for actors and writers alike, arose from the rise of streaming. These platforms, with their subscription-based model, have turned shows and films into lures for potential subscribers, challenging the traditional metrics used to evaluate and compensate the creative minds behind them. The debate around residuals is crucial, as the streamers hold their viewership numbers close to their chests. A cutthroat proposal by the AMPTP, suggesting one-time compensation for background performers in exchange for eternal rights to their digital likenesses, angered those on the side of the workers. The wealth disparity within the acting community further complicates these discussions. While headline-grabbing stars amass fortunes, a significant portion of actors live on the precipice of financial instability. Their reliance on residual payments is not just about fair compensation for their craft, but a means to secure basic necessities like health coverage.
Actors have also been licensing their voices for AI models. Val Kilmer had his voice recreated through Sonantic's deepfake technology, allowing him to ‘speak’ once again after losing his voice to throat cancer. In another example, deepfake technology enabled David Beckham to deliver his message in nine different languages as part of a campaign against malaria. There are many benefits to the use of AI in sound technology. Actors can now receive commissions for work they never have to perform; they have broader licensing options and could likely scale up their number of auditions and projects. The rights over their voice samples are reminiscent of the music sampling discussed earlier. You can read a previous letter exploring this issue in more detail here.
There is also the issue on the side of the audience. As viewers of a synthetic medium, our understanding of reality is shaped to conform to the requirements of that medium; its limitations become our limitations. This medium processes words and images as statistical relationships, offering only a superficial reflection of the multifaceted web of real-world entities. It is a low-dimensional rendition, lacking the depth and richness of genuine human cognition. Human thought and societal structures are uniquely shaped by genetics, epigenetics and cultural transmission; our intelligence and creativity are unparalleled in their depth. Contrasting this with large language models (LLMs) underscores the vast difference between genuine human understanding and the simplified, biased, data-driven approximations of these models.
These are vague representations of real people, with narrow, simple and functional relationships within their community. Even this is a simulacrum of a real community. That is largely what we are left with in an ever-digitizing world: a pool of signs interjecting, overriding and subsuming one another, until the original reality that created them becomes obscure and unreachable. Our anthropomorphic lenses and dopamine-seeking brains will not be able to peer through the veil. A person with a unique background and identity is replaced, in the network, by a sign. In this electric age, we no longer have access to the ground reality.
Electronic media presents signs that come to replace the real things they were supposed to indicate. Consider distinctions between social groups stemming from historical structures: while some cultures are deeply rooted in oral storytelling traditions, others, like the USA, are anchored in printed media. When the fruits of oral traditions are translated into electronic media, they are distilled into signs, symbols or stereotypes. These indirect representations are easier for a wide audience to grasp, but they lack the depth and richness of the original story. A complex cultural ritual might be reduced to a simple dance or a specific costume in a TV show. The sign itself lives on while the original culture, based on storytelling and other spoken-word practices, dwindles into nothingness. In a previous letter, I explore this idea in some depth.
Many of the issues we have just stumbled upon have already been experienced, and to a degree dealt with, in China. Synthetic news presenters go back at least to 2018, when Chinese state news agency Xinhua and search engine Sogou introduced pioneering 2D newsbots. Their images were drawn from videos, while their motions and voices were driven by machine learning. Two years later, the broadcaster upgraded to 3D-rendered avatars produced using “multimodal recognition and synthesis, facial recognition and animation and transfer learning.” While broadcasters can use AI-generated talking heads to save time and money, propagandists can use them to gain an aura of newsy credibility.
For example, an unidentified group used Synthesia, a web service that makes AI-generated characters, to generate fake news clips from a fictional outlet called Wolf News. One clip attacked the U.S. government for failing to act on gun violence, while another promoted cooperation between the U.S. and China. Consider, too, how long it took us to establish rights over the use of our DNA. Even that battle is far from over: we still face limits on those biological rights, despite how fundamental our cellular and genetic information is to our lives and identity.
While the industry shifts towards advanced technology, the culture will shift too. Soon, AI will be present in just about every aspect of the entertainment and media industry, just as computers became distributed worldwide in the late 1970s and early 1980s. New jobs, skills and techniques will emerge. Many of the best things we will ever hear and see will be partially or entirely produced by AI. The hope is that human creativity doesn’t get outsourced, but rather enriched, by this new set of tools. While the end of these strikes reflects hope and progress, there is still a long way to go before the situation stabilizes. I hope to do my part to use AI to give creatives power, and to expose the way corporations and institutions take advantage of people and the environment.