The Creative Clash: AI vs Hollywood
Artists and performers have always had to fight for their rights against studios, executives and now, robots.
The struggle between artists and AI is a peculiar one. Unlike the Luddite protests of the 19th century, something deeper is at stake here. This is art: a reflection of culture, powerful movements and emotional experiences. And while a robot may simply replace a factory worker, many artists are being put in a position where they must, at the very least, contend with AI in their creative process.
What if the writer or writing team of a television series left a project before the studio was finished with the show? Perhaps there’s demand for a new season, but the talent has already moved on to other commitments. AI could learn from the show’s existing scripts, and even from the writers’ other work, to generate material that fills the gap. There are many ways media studios can leverage AI, especially given the sheer size of the entertainment industry and the money, time and effort that goes into all of it.
All of this raises a couple of questions: what rights do artists have over their own works when studios may not have their best interests in mind? What’s to stop a studio from training AI on a writer’s work and then removing the writer from the project to save money? These questions apply not just to writing, but to all forms of media. Recently, strikes by writers and actors have led to some successful negotiations over their rights.
From early May to late September 2023, the Writers Guild of America (WGA), representing 11,500 screenwriters, went on strike. The Hollywood writers' strike, which lasted 148 days, arose initially from compensation issues related to the rise of streaming services. However, as negotiations between the WGA and companies like Disney and Netflix unfolded, the use of AI became a pivotal point of contention. Disney has been criticized for its use of AI in the recent show “Secret Invasion” as well as in a poster image for the second season of “Loki”.
In another incident, Disney purchased a stock image that had been made with AI, which the author had not disclosed; the platform it was bought from does not allow the sale of AI-generated images. Interestingly, many Disney employees are members of the Copyright Alliance, and some of them have publicly denounced the use of AI and anyone using it. The company’s corporate strategy is unclear, but it appears to fit the theme of “playing both sides”.
Unions have always played a crucial role in fighting for workers' rights in Hollywood. The International Alliance of Theatrical Stage Employees (IATSE) was Hollywood’s first union, founded in 1893, and it championed the unsung heroes of film: the crew and non-star cast. Since the 1930s, coinciding with the rise of "talkies," unions have been the backbone of Hollywood, as guilds like the WGA, SAG-AFTRA, and DGA emerged, boasting members including the iconic Marilyn Monroe. The Great Depression hit unions hard, but the tide turned in 1932, when the Norris–La Guardia Act, signed by President Hoover, fortified labor rights and set the stage for Hollywood's union-dominated era. These guilds have since represented a range of talents, from TV artists and directors to video game voice actors, and led major strikes in 1960, 1988, and notably in 2007.
The strikes this year highlighted growing concerns across creative industries about AI's potential to replace human labor, particularly in highly skilled and creative fields. Besides writers, actors and other performers are also pushing for regulations on AI use, such as the simulation of actor performances or the digital editing of filmed expressions. A major concern was the potential use of AI, such as ChatGPT, to generate story ideas or scripts, which could undermine writers' compensation and credits. The WGA proposed a ban on such AI usage in the industry. Additionally, they demanded that scripts covered under the union's collective bargaining agreement not be used to train AI models. This reflects a broader resistance against tech companies using online content to train large language models without compensating the original creators. Great risk is incurred when large, hypercapitalist corporations use AI against artists’ best interests. This rather predatory practice, uncaring of the actual outcome for creators, has been a staple of Hollywood since its inception.
The infusion of computer technology in Hollywood dates back to the late 20th century, with early forms of digital effects and computer-generated imagery (CGI). This initial phase signaled a shift towards embracing technological advancements in filmmaking. The 90s and early 2000s saw a significant leap with movies like "Toy Story" (1995) pioneering digital animation, and "The Matrix" (1999) showcasing advanced visual effects.
This era began shifting labor dynamics, with a growing need for digital skills over traditional methods. By 2015, the major motion picture industry was in the midst of a digital transformation, hinting at the ongoing evolution of technology within the industry. Around this time, machine learning and big data started being employed for predictive analytics, such as box office forecasting, and for streamlining processes like editing and sound design. However, the core creative aspects remained largely human-driven. The emergence of more advanced AI, like the generative models of the late 2010s and early 2020s, began encroaching on traditionally human-centric domains like scriptwriting. This brings us to today.
The WGA signed a three-year contract with the Alliance of Motion Picture and Television Producers (AMPTP), ending the strike. Both sides got some of what they wanted from the negotiation, although many gaps and questions remain. One grey area stems from the fact that both parties retain the ability to assert or exercise additional rights not directly addressed in the agreement. So while studios may seek to retain the right to train artificial intelligence models on writers' work, such as scripts and screenplays, writers could argue that the exploitation of their material is a breach of their reserved rights. There are clearer protections for writers in the agreement: AI cannot autonomously author or modify literary content. This aligns with recent copyright rulings. Today, content produced by AI with no human editing cannot be protected under copyright (in most countries, anyway), and by the time a script makes it to final footage, it has usually been refined and changed through several revisions by different parties or experts. Another key agreement was that any work produced by AI cannot infringe upon the distinct rights of a writer; AI-generated work essentially won’t affect a writer’s compensation or credit. The use of AI is permissible only if writers willingly choose to use it. To ensure transparency, studios are now obliged to inform writers whenever AI played any part in the materials they receive.
This outcome appears to largely cater to writers’ demands, notably setting clearer boundaries around AI's involvement, alongside enhanced pay and benefits. Still, all of this seems rather short-term. It fails to address other challenges, such as script submissions and possible accusations of AI-driven plagiarism: it is nearly impossible to distinguish AI-generated text from human-written text, especially after some deft editing by a human in the loop. Furthermore, issues such as residuals, censorship, ratings, and the unauthorized use of personal likenesses in AI-generated content remain unresolved. The accord, though progressive, leaves room for interpretation in several areas and may require additional negotiations or guidelines to address these gaps.
Parallel to the watershed discussions on scriptwriting, the actors were striking too. The fear among many actors is that studios could create AI-generated replicas of performers, undercutting their rightful compensation and credits. These "digital twins" threaten to reduce actors to mere templates for AI, cutting both compensation and recognition. The strike isn't merely about wages; it’s a manifestation of a deeper discontent brewing in the heart of Hollywood. Back in 2013, a digital effigy of Audrey Hepburn was resurrected for a chocolate commercial. Now the technology is far more advanced.
The issue hits harder for lesser-known actors. In a bid for exposure, some might agree to have their likeness captured and stored, a digital asset for studios to exploit indefinitely. This scenario is a play within a play, showcasing the struggle of lesser-known actors in the broader theater of Hollywood. There have also been related court rulings on the use of music samples. In Ludlow Music Inc. v. Williams (2000), a two-line lyrical sample from the song ‘I’m The Way’ was used in somebody else’s work. This minimal sampling led to a dispute, with the judgment acknowledging the extent of copying as substantial, although only just. The case underscores the vague boundaries surrounding the legality of sampling, even when it involves a seemingly negligible portion of the original work. There are ongoing lawsuits and copyright rulings on this topic. Last year I wrote a fairly detailed piece exploring the issue of AI and copyright; there have been many case studies and developments since then. If you’d like a rundown of AI and copyright concerns, you can check it out while I develop a more up-to-date letter on the issue.
Much of the impetus behind the strikes, for actors and writers alike, arose from the ascendance of streaming. These platforms, with their subscription-based model, have turned shows and films into lures for potential subscribers, challenging the traditional metrics used to evaluate and compensate the creative minds behind them. The debate around residuals is crucial, as the streamers hold their viewership numbers close to the chest. A cutthroat proposal by the AMPTP suggested one-time compensation for background performers in exchange for eternal rights to their digital likeness, angering those on the side of the workers. The wealth disparity within the acting community further complicates these discussions. While headline-grabbing stars amass fortunes, a significant portion of actors live on the precipice of financial instability. Their reliance on residual payments is not just about fair compensation for their craft, but a means to secure basic necessities like health coverage.
Actors have also been licensing their voices for AI models. Val Kilmer had his voice recreated through Sonantic's deepfake technology, allowing him to ‘speak’ once again after losing his voice to throat cancer. In another example, deepfake technology enabled David Beckham to deliver a message in nine different languages as part of a campaign against malaria. There are many benefits to the use of AI in sound technology: actors can now be paid for work they never have to perform, they have broader reach in terms of licensing options, and they could likely scale up their number of auditions and projects. The question of rights over their voice samples is reminiscent of the music sampling in the previous paragraph. You can read a previous letter exploring this issue in more detail here.
There is also an issue on the side of the audience. As viewers of a synthetic medium, our understanding of reality will be shaped to conform to the requirements of that medium; the limitations of the medium become our limitations. This medium processes words and images as statistical relationships, offering only a superficial reflection of the multifaceted web of real-world entities. It is a low-dimensional rendition, lacking the depth and richness of genuine human cognition. Human thought and societal structures are uniquely shaped by genetics, epigenetics, and cultural transmission. These factors interplay to create a multidimensional tapestry of understanding, making human cognition and society unparalleled in depth. Contrasting this with LLMs (large language models) underscores the vast difference between genuine human understanding and the simplified, biased, data-driven approximations of these models.
These are vague representations of real people, with narrow, simple and functional relationships within their community. Even here, this is a simulacrum of a real community. That is largely what we have left in an ever-digitizing world: a pool of signs interjecting, overriding and subsuming one another, until the original reality that created them becomes obscure and unreachable. Our anthropomorphic lenses and dopamine-seeking brains will not be able to peer through the veil. A person with a unique background and identity is replaced, in the network, by a sign. In this electric age, we no longer have access to the ground reality.
Electric media present signs, which come to replace the real things they were supposed to indicate. Consider distinctions between social groups stemming from historical structures. For instance, while some cultures are deeply rooted in oral storytelling traditions, others, like the USA, are anchored in printed media. When we attempt to translate the fruits of oral traditions into electronic media, they are distilled into signs, symbols or stereotypes. These indirect representations are easier for a wide audience to receive and understand, but they lack the depth and richness of the original story. A complex cultural ritual might be reduced to a simple dance or a specific costume in a TV show. The sign itself may live on while the original culture, based on storytelling and other spoken-word practices, dwindles into nothingness. In a previous letter, I explore this idea in some depth.
Many of the issues we have just stumbled upon have already been experienced by, and to a degree dealt with in, China. Synthetic news presenters go back at least to 2018, when Chinese state news agency Xinhua and search engine Sogou introduced pioneering 2D newsbots. Their images were drawn from videos, while their motions and voices were driven by machine learning. Two years later, Xinhua upgraded to 3D-rendered avatars produced using “multimodal recognition and synthesis, facial recognition and animation and transfer learning.” While broadcasters can use AI-generated talking heads to save time and money, propagandists can use them to gain an aura of newsy credibility. For example, an unidentified group used Synthesia, a web service that makes AI-generated characters, to generate fake news clips from a fictional outlet called Wolf News. One clip attacked the U.S. government for failing to take action against gun violence, while another promoted cooperation between the U.S. and China. A useful comparison is how long it took us to establish rights over the use of our own DNA. Even that battle is far from over: limitations on those biological rights remain, even though many see the importance of being able to grant or deny their use by other parties as we choose.
This dissonance resonates through to the recent staging of my play, “The Worst Kind of Love.” Earlier this year, I gathered with some friends and began an experiment. Together, we came up with six stage plays, cast actors from the Edinburgh region in Scotland, and hosted a stage production under the theme of “The Worst Kind of Love.” The writing was divided among six groups, each responsible for one play. One group consisted of two people with no AI in the writing process, while each of the others involved a pair of people using AI to generate a play. In the five groups that used AI, the humans in the loop interpreted the theme but faced limits on the number of prompts, the word count, and the type of refinements they could make. In essence, the AI groups supplied the theme of “The Worst Kind of Love” and a paragraph or two as the prompt, and the AI did the rest, with only technical edits allowed. The result was that both judges could tell which of the six plays were AI-generated and which were not. However, the two winning plays, which tied, were the fully human-written piece and an AI-generated one. In other words, sometimes we can detect AI-generated content and still be satisfied with it in some contexts. Just a few months after this event, I was alarmed to find at least ten works publicly admitted to be AI-generated presented at the Edinburgh Fringe Festival. The ratio of AI-generated media to human-created artwork keeps shifting in favour of the AI. We are becoming performers for plays hallucinated by an ill-trained text regurgitator. Perhaps it’s time to seize control of technology to avoid a loss in the quality of our arts and cultural inheritance. AI should represent us, not the other way around.
Read on to learn more about AI, law and culture.