
Hollywood’s Fight Against A.I. Will Affect Us All

Screenwriters, actors, authors, and artists are fighting to ensure that human beings are not shunted to the margins of our culture.

Vuk Valcic/SOPA Images/LightRocket/Getty Images
A rally in London in solidarity with the Screen Actors Guild strike, protesting the replacement of workers in Hollywood with A.I.

Hollywood is caught up in its own science fiction tale. Actors and screenwriters went on strike seeking a fair share of an entertainment economy now slanted toward streaming. But as writers’ rooms and movie sets sit dark, another fear lurks: that the industry may be on the cusp of a new metamorphosis, involving artificial intelligence, which could usurp the role of writers and actors altogether. Artists and book authors share similar worries and have filed copyright lawsuits against A.I. platforms, seeking to protect their life’s work and livelihoods. These efforts to barricade against the ramifications of A.I. are the early acts of a drama that will affect not just employment prospects for creative professionals but the very shape of our culture, discourse, and dealings with one another. 

A.I. is not new to Hollywood. Producers rely on it to analyze viewer behaviors, to age actors, and even to bring dead performers back to life in sound or on screen. But the dawn of generative A.I. powered by large language models has raised the prospect that chatbots could be used to generate plots or scripts for crime shows, animated series, and other formulaic television fare, with human writers left only to punch up the computer-generated work. Actors worry that studios will use A.I. to scan and replicate performers, drastically cutting the costs and complexity of production.

The appeal to industry executives is undeniable. Lower costs mean higher profits. A.I. may also boost revenue in other ways: Hollywood has reams of data on what audiences want both collectively and individually. A sector that has always had to appeal to mass markets may be able to offer custom, on-demand television, thanks to artificial intelligence. Books and art could go the same way. 

Writers, actors, authors, and artists are striking and suing to assert their stake in these upheavals. But the fight over A.I. and creative work implicates us all. Language-based A.I. models work by digesting vast quantities of existing content, tracing patterns, and then drawing on what they have learned. We marvel at their ability to generate prose that seems indistinguishable from what a human could pen; in one small experiment, Harvard professors gave chatbot-written essays high grades. Yet A.I.-generated creations are innately derivative and, by definition, at some level predictable. It is true that critics for a century have accused the entertainment industry of producing unoriginal and hackneyed content and even conditioning audiences to lap up such fare. But despite commercial constraints that can drive toward sameness, writers, actors, and other creative talents have always managed to produce original, thought-provoking, and exciting works of art.

The changes wrought by A.I. may have existential implications not just for scriptwriters and actors but also for audiences—namely all of us. The premise of generative A.I. is that content production is based on what went before. After all, we educate kids by having them imbibe readings, formulas, theories, stories, and pictures that train their brains to generate new words, images, and ideas as needed—whether to complete an assignment, persuade a jury, or forge a friendship. The large language models behind artificial intelligence were built to mimic, or at least appear to mimic, these brain functions.

Yet, of course, human intellect is more than just the sum of everything we’ve read and seen. Our very humanity adds unpredictable, individualized flavor and texture to the raw ingredients our education offers. No two people perceive the same set of facts identically. Original perspectives, new linkages between disparate things, questioned assumptions, and inventive visions are what propel artistic, scientific, and conceptual breakthroughs. Our responses to prompts are informed not just by what we’ve read or know but by irrationality, fancy, and fantasy. Originality springs not just from ingested content but from life experience, social dynamics, and flights of imagination. Writers and actors mine minute human encounters—gestures and silences—to capture the subtlest nuances. To be human is to have pain, joy, shock, trauma, euphoria, surprise, and more, which shape what we think and say in ways that even the most learned machine may never be able to replicate. When it comes to actors and singers, the power of their work lies in interpretation rather than replication. Different styles, cadences, artistic methods, and personalities are what make it worthwhile to revive old classics and record new covers of favorite songs.

If we aren’t careful, we risk an increasingly derivative culture, where a growing array of available products includes only a thin overlay of human creativity; one where audiences are primed not to notice what they are missing. It is, of course, possible that farsighted entrepreneurs and business leaders will carefully calibrate where A.I. is used, touching only those functions that can be performed just as well without human ingenuity. But if our collective experience with social media is any guide, A.I. may take over far more than we envision. If it ranges too far, the shift toward a secondhand, machine-made entertainment landscape risks undercutting the catalytic role that culture, at its best, can play. Piercing novels, poems, and films have unique power to help audiences bridge social and cultural divides by putting them inside the heads of characters unlike themselves. Books and films unleash empathy, spur questions about settled orthodoxy, and expose our blind spots. With an algorithm feeding us what we want to see and hear, it may become easier than ever to avoid unsettling truths.

An A.I.-infused culture may also obscure the quest for truth more broadly. A meeting between President Joe Biden and the heads of major A.I. companies yielded a set of voluntary agreements, including a commitment to “watermark” A.I.-generated content. The administration is rightly alarmed that chatbots are becoming increasingly indistinguishable from humans, though it is unclear that a technical fix exists for that problem. When it comes to writing, art, or performance, much of what moves us emotionally and intellectually is the knowledge that a poem, painting, or scene was conjured by another person. That’s why we read biographies of acclaimed novelists or Google to learn more about the star of a film we just saw. Contemplating Jewish Chattanoogan Arthur Golden’s vivid first-person account of the life of a Japanese geisha; artist Lee Krasner striving for recognition alongside her more famous husband, Jackson Pollock; or onetime Tinseltown bad boy Robert Downey Jr. playing a conniving villain in Oppenheimer helps us access the multitudes contained inside each of us. Art itself, and our own understandings and imaginings about the person who created it, combine to yield the full experience of artistic appreciation.

The notion that there might be no author, actor, musician, or painter behind a creative work raises fundamental questions of how and why we connect to art. At its most extreme, delinking artists from their art could gradually render brushstrokes, musical notes, and words on paper just that, devoid of the artistic meaning we impute to them. If we lose the ability to distinguish between man- and machine-made art, the power of the former may also erode. Over time the knowledge that art and writing may be machine-made could prompt us to erect defenses lest we discover that we were moved emotionally by a poet or soloist who does not exist or by a for-profit entity deliberately seeking to manipulate us.

Concerns with A.I. and authenticity extend beyond art into culture, potentially reshaping our critical faculties, openness to ideas, and ability to engage in reasoned discourse. Long before large language models were rolled out, social media had fueled a firestorm of disinformation, motivated by ideology, greed, and, in some cases, genuine misapprehension about things like the spread of Covid-19. We have learned that social media, as currently constituted, weaponizes human nature, jet-fueling some of our worst instincts such that false information travels further and faster than facts. Even the word “disinformation” has become tainted as code for claims that some particular political flank seeks to discredit.

A.I. will make disinformation cheaper to create, disseminate, and personalize, allowing work that once took armies of human trolls to be carried out largely by algorithms. The cost of launching a fusillade of one-star reviews on Goodreads to tank a book before it is even published is already next to zero. In authoritarian societies, citizens who lack access to credible information simply tune out of public life, not knowing what or whom to believe and unable to make informed decisions. An A.I.-infused profusion of disinformation could accelerate civic disengagement and cultural balkanization.

A.I. also has implications for free expression. Free speech has always encompassed not just the right to shout at the top of one’s lungs but also the opportunity to receive and impart information, to engage in give-and-take, to persuade and be persuaded, and to hunt for the truth. These values, at the core of why free speech is protected by the First Amendment and in international law, are all predicated on authentic communication between human beings. They depend upon our ability to discern who is on the other side of a conversation and to weigh their claims accordingly. In an environment where, short of face-to-face encounters, the identity and motives of speakers are increasingly obscured, the very value of communication and speech may wither.

A.I. may also shape our critical faculties. When I take a wrong turn or don’t follow directions properly in the car, my GPS system evinces no irritation and instead matter-of-factly recalibrates in light of the mistake (“Make a U-turn if possible” is about as tough as it gets). For those of us old enough to remember the days of irate backseat drivers wielding folding maps, this equanimity comes as a great relief. Yet as we start to rely on A.I. to grade papers and train youth, such standard, preprogrammed responses will increasingly replace spontaneous human reactions. It is hard to know what is lost when an essay covered in red-pen corrections of grammar and syntax (with occasional snark) is replaced by a screen that neutrally flags the same mistakes. While A.I. encounters may eliminate friction and lessen the shame of being corrected, draining too much emotion, spontaneity, and variation from such encounters runs the risk of subduing the human experience. Personal encounters imprint on our psyches in ways that interactions with machines do not; we wax nostalgic about favorite teachers in ways we never will about chatbots. We are hardwired to react to one another’s emotions, which help us hone our own instincts and intuitions. Children who grow up playing video games, buying things online, and being tutored by chatbots may lose access to the range of human reactions that help us refine what we say and how we behave based on the response we hope to elicit.

As A.I. technology evolves, its potential effects on creative expression, culture, and discourse must be central to conversations about policy and regulation. No one is calling to ban or freeze A.I. The Writers Guild of America wants to make sure that screenplays and original intellectual property cannot be generated by chatbots but demands no constraints on the use of A.I. in editing. The Authors Guild wants novelists and nonfiction writers to be compensated when their work is fed into large language models. PEN America, which I lead, wants policy solutions and regulations concerning generative A.I. to support free expression, speech, creativity, and innovation.

If the rapid rise of social media taught us anything, it is that by the time a technology is distributed on a mass scale, its harms can become virtually impossible to contain. There are powerful imperatives to introduce A.I. as soon as possible to help doctors discern patterns in mammograms that may be invisible to the human eye or render hazardous weather predictions more accurate. But there is no comparable urgency when it comes to many of the contemplated uses of large language models. Figuring out how to actually affix markers allowing consumers to distinguish between A.I. content and human output or limiting public access to certain A.I. tools may give experts time to track and understand platforms before they overtake us. Innovative licensing schemes will be needed to adequately compensate human content creators for having their material scraped into A.I. training sets that can generate replicas of their work. Regulation is needed to guarantee independent researchers broad access to A.I. data in order to help us understand how our lives are being reshaped. Similarly, oversight bodies composed of human rights and free expression experts, like the one envisioned in the EU’s draft A.I. law or Meta’s Oversight Board (on which I sit), should be empowered to review and weigh in on industry developments.

As we take the first steps into a world until now represented only in fiction, we must view A.I. not just as a labor, technology, or political issue but as a cultural one. Screenwriters, actors, authors, and artists are fighting to ensure that human beings are not shunted to the margins of our culture. All those who enjoy art, watch films and television, read books, and absorb information online have a stake in ensuring those activities are not undercut by a hasty transformation that, albeit unintentionally, could leave much of what enriches our lives on the cutting-room floor.