Facsimiles

Artificial Intelligence Meets Its Worst Enemy: The U.S. Copyright Office

Washington’s legal eagles have held that these technical entities are incapable of creating art. At least one chatbot we spoke with agrees.

A panel from Kristina Kashtanova’s comic book “Zarya of the Dawn,” which contains A.I.-generated images that the U.S. Copyright Office ruled cannot be copyrighted.

What is art? In recent months, the Supreme Court tackled this issue by wading into a copyright dispute between photographer Lynn Goldsmith and Andy Warhol’s estate over a portrait of Prince. Now the U.S. Copyright Office is jumping into the fray by first granting, and then revoking, the copyright of a comic book whose author used a “generative A.I. program” to illustrate it.

Kris Kashtanova, a New York–based photographer, wrote an 18-page comic titled Zarya of the Dawn. For art, she turned to Midjourney, which is described by its creators as “text-to-image AI where your imagination is the only limit.” She entered text prompts into the program, and it procedurally generated the artwork for her. Kashtanova then submitted the comic to the U.S. Copyright Office for protection, which the office initially granted last September.

Kashtanova posted the notification on Instagram shortly thereafter to celebrate what she saw as a legal milestone. “I tried to make a case that we do own copyright when we make something using AI,” she wrote in the caption, noting that the artwork “hadn’t been altered in any other way” by her. The top left corner, where artist and writer credits are usually placed on American comics, lists her last name first and then “Midjourney” underneath it. “My friend lawyer, gave me this idea and I decided to make a precedent,” she added.

The Copyright Office somehow learned about her assertion and started a review. Kashtanova’s lawyers responded, the office said, by arguing that she had “authored every aspect of the work, with Midjourney serving merely as an assistive tool.” As an alternative, they also argued that portions of the work could be copyrighted “because the text was authored by Ms. Kashtanova and the Work is a copyrightable compilation due to her creative selection, coordination, and arrangement of the text and images.”

In a February 21 letter, the office told them that it was choosing the latter option. It rescinded her original copyright registration and issued a narrower amended one that did not cover the Midjourney-generated artwork. Instead, it was limited to the “text” and the “selection, coordination, and arrangement of text created by the author,” explicitly excluding “artwork generated by artificial intelligence.” The ruling appears to be the first of its kind by the federal government on how copyright applies to algorithmically created artworks.

The Copyright Office appears to have gotten it right. Silicon Valley is abuzz these days with the promise and potential of artificial intelligence. A.I. chatbots have been touted as potential replacements for doctors, lawyers, musicians, and even journalists like myself. Many of these chatbots or similar “generative A.I.” programs can be quite sophisticated, including ChatGPT, which I interviewed for this article.

Concerns also abound. In a recent Wall Street Journal op-ed, former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and MIT computing dean Daniel Huttenlocher described ChatGPT as the “herald” of an “intellectual revolution” dwarfing anything in the preceding four centuries. Kissinger is better known for his foreign policy than his expertise in computing, but he has made a hobby of learning more about the subject in recent years. In 2018, he wrote in The Atlantic that human society was “unprepared for the rise of artificial intelligence” and warned that it could be “how the Enlightenment ends.”

His fears have not dissipated since then. “Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment,” Kissinger and the other two men wrote in the Journal op-ed. They compared it to the printing press, which allowed the rapid dissemination of information beyond the traditional authorities and gatekeepers that ruled the medieval European world. “Generative AI will similarly open revolutionary avenues for human reason and new horizons for consolidated knowledge,” they predicted.

That novelty, they argued, will come at a cost. The Enlightenment and all of its hallmarks—the scientific method, deductive reasoning, a willingness to question authority and established truth—rely on investigative searches for the truth. Where ChatGPT and similar programs are concerned, the knowledge simply appears without any thought or creativity. “Inherently, highly complex AI furthers human knowledge but not human understanding—a phenomenon contrary to almost all of post-Enlightenment modernity,” the three men argued.

Like Kissinger, I am not an expert in computing—one of the few things we have in common. My knowledge of artificial intelligence comes largely from growing up watching Star Trek, where the android Data tries to paint or recite poetry in an attempt to move closer to humanity. I suspect that the average person’s assumptions about the phrase “artificial intelligence” are similarly literal and Data-like: that it is human or near-human intelligence, but based inside a computer instead of an organic brain.

What Silicon Valley calls “artificial intelligence” at the moment is something more like the computer aboard the USS Enterprise, which can respond to voice prompts and answer a wide range of queries instantaneously. Where is the nearest solar system? How many Klingon warships are pursuing us? How many life-forms are on the planet below? These answers can come much more quickly than any human could give, but they also do not reflect creativity or imagination in any meaningful sense.

I asked ChatGPT, the most famous of these chatbots, whether it counted as artificial intelligence. “As an A.I. language model, ChatGPT is indeed a form of artificial intelligence,” it insisted. “However, it is important to note that ChatGPT is not a fully autonomous, self-aware entity in the way that we might commonly imagine artificial intelligence.” ChatGPT, it explained, “operates through a complex system of algorithms and computational processes that allow it to process language input and generate language output in response.” To make the most of those algorithms, the bot “has been trained on vast amounts of text data and has learned to recognize patterns in language and generate appropriate responses.”
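
The mechanics the bot is describing can be made concrete with a toy example. The sketch below is a deliberately crude stand-in, written on the assumption that “recognizing patterns in language” reduces, at its very simplest, to counting which words tend to follow which and sampling from those counts; real large language models use neural networks trained on enormous corpora, and nothing here is OpenAI’s actual code.

```python
import random
from collections import defaultdict

# Toy bigram model: a drastically simplified stand-in for the kind of
# pattern-learning ChatGPT describes. The corpus, names, and method
# here are illustrative assumptions, not OpenAI's actual system.

corpus = "the office ruled that the work was not the product of human authorship".split()

# "Training": record which word has been seen following which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8, seed: int = 1) -> str:
    """Generate text by repeatedly picking a word seen after the last one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Even this crude counter can string together plausible-looking phrases without anything resembling understanding, which is the point the bot goes on to concede.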

Then came the disclaimer. “However, ChatGPT does not have the ability to ‘think’ or ‘reason’ in the same way that a human being can,” the bot went on to explain. “It does not have emotions, experiences, or subjective consciousness. It is a machine designed to perform a specific task—namely, to generate language output in response to language input.”

This answer felt unsatisfying to me. So I asked ChatGPT whether something that wasn’t “fully autonomous” or “self-aware” could be considered artificial intelligence. “While ChatGPT may not have full autonomy or self-awareness, it is still able to perform complex language processing tasks through the use of machine learning algorithms and natural language processing techniques,” it replied. “Its ability to generate coherent responses based on input is a hallmark of artificial intelligence, even if it is not truly self-aware or conscious.”

I disagree. If I ask a human being whether they are self-aware or intelligent, they do not explain to me why I might be wrong. Nor do they offer a logical account of their own capabilities and why those capabilities qualify as intelligence. They get mad. They take it personally. Maybe they dismiss me and move along with their day. Maybe they yell and utter an expletive at me for insulting them. They understand the question behind my question: Are you stupid? Are you worthy of respect? Are you my social equal, or are you inferior in some way? Human intelligence has an ineffable spark that ChatGPT—and other “A.I.” programs thus far—cannot replicate or even fully simulate.

That brings us back to Kashtanova’s comic book. The federal Copyright Act is understandably broad in scope, covering almost any “original work of authorship fixed in any tangible medium of expression.” But that scope is not unlimited, the office noted in its letter to Kashtanova. The Supreme Court ruled in 1991 that a phone book was not eligible for copyright protection because it lacked even a “modicum of creativity” as the Constitution and federal law anticipated. A photograph taken by a monkey was famously not eligible for copyright protection because the statute is limited to human creations.

The program Kashtanova used is similarly unimaginative. “Because Midjourney ‘does not understand grammar, sentence structure, or words like humans,’ it instead converts words and phrases ‘into smaller pieces, called tokens, that can be compared to its training data and then used to generate an image,’” the office explained. “Generation involves Midjourney starting with ‘a field of visual noise, like television static, [used] as a starting point to generate the initial image grids’ and then using an algorithm to refine that static into human-recognizable images.” In other words, it does not actually generate art—it algorithmically assembles randomized images until it produces something that humans will mistake for art.
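
The office’s description of that pipeline can be sketched in miniature. The toy loop below assumes only what the letter describes: a prompt broken into tokens, an image that begins as random static, and an algorithm that repeatedly nudges the static toward something that scores better against the prompt. The scoring function and every name here are illustrative inventions for this article; Midjourney’s actual system is a proprietary diffusion model conditioned on learned embeddings and is far more sophisticated.

```python
import numpy as np

# Toy illustration of the process the Copyright Office describes.
# None of this is Midjourney's real code; it is a simplified stand-in.

def tokenize(prompt: str) -> list[str]:
    """Break the prompt into smaller pieces ("tokens")."""
    return prompt.lower().split()

def score_against_tokens(image: np.ndarray, tokens: list[str]) -> float:
    """Hypothetical stand-in for comparing an image to the prompt.
    A real system uses a learned model; here we simply pretend that
    brighter images match this particular prompt better."""
    return float(image.mean()) * len(tokens)

def generate(prompt: str, steps: int = 50, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    tokens = tokenize(prompt)
    # Start from "a field of visual noise, like television static."
    image = rng.random((64, 64))
    for _ in range(steps):
        # Propose a small random refinement...
        candidate = np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1)
        # ...and keep it only if it "matches" the prompt better.
        if score_against_tokens(candidate, tokens) > score_against_tokens(image, tokens):
            image = candidate
    return image

art = generate("the dawn over a ruined city")
```

The shape of the process is the point: randomness refined by an automated scoring rule, with no step at which the machine imagines anything.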

Randomized and uncontrolled generation renders the result ineligible for copyright, the office said. While it noted that Kashtanova said she went to great lengths to enter prompts specific enough to produce her desired result, that input was not the same as actual creative work. The office compared her to a patron who commissions a piece of artwork from an artist based on specific instructions. If she had given the same instructions to a human artist that she gave to Midjourney, the office noted, Kashtanova herself would not be able to claim the copyright of the piece that the artist ultimately produced.

That explanation reflects a laudably insightful understanding of labor and creation on the Copyright Office’s part. It may be even more apt than the government realized. A “generative A.I.,” after all, is only as good as the material it is “trained” on—the corpus of raw text or images that the algorithm then uses to produce simulacra of new text and images. In January, a group of artists filed a class-action lawsuit against Midjourney and other artbot creators for copyright violations, alleging that the companies used their art as part of the raw material for their bots. Getty Images announced that same month that it would sue U.K.-based Stability AI on similar grounds for using Getty’s vast image archive to build its artbots.

Some A.I. proponents have billed chatbots and artbots as a way to “disrupt” professions that rely on creative and intellectual capital. It’s one thing to replace taxi cabs with Uber and Lyft, however, and another to replace a public defender with a laptop. The underlying labor behind ride-sharing services is basically the same; it’s just the delivery mechanism and profit structure that changed. Perhaps someday technology will advance to the point where a Data-like android writes my articles for free instead of an actual person who receives wages and health insurance. That would first require the android’s programmers to actually understand the jobs they are trying to replace beyond “create an image” or “write a text”—something that, so far at least, appears beyond their abilities.