AI is capable of making music, but does that make AI an artist? As AI begins to reshape how music is made, our legal systems are going to be confronted with some messy questions regarding authorship. Do AI algorithms create their own work, or is it the humans behind them? What happens if AI software trained solely on Beyoncé creates a track that sounds just like her? “I won’t mince words,” says Jonathan Bailey, CTO of iZotope. “This is a total legal clusterfuck.”
The word “human” does not appear at all in US copyright law, and there’s not much existing litigation around the word’s absence. This has created a giant gray area and left AI’s place in copyright unclear. It also means the law doesn’t account for AI’s unique abilities, like its potential to work endlessly and mimic the sound of a specific artist. Depending on how legal decisions shake out, AI systems could become a valuable tool to assist creativity, a nuisance ripping off hard-working human musicians, or both.
Artists already face the possibility of AI being used to mimic their style, and current copyright law may allow it. Say an AI system is trained exclusively on Beyoncé’s music. “A Botyoncé, if you will, or BeyoncAI,” says Meredith Rose, policy counsel at Public Knowledge. If that system then makes music that sounds like Beyoncé, is Beyoncé owed anything? Several legal experts believe the answer is “no.”
“There’s nothing legally requiring you to give her any profits from it unless you’re directly sampling,” Rose says. There’s room for debate, she says, over whether this is good for musicians. “I think courts and our general instinct would say, ‘Well, if an algorithm is only fed Beyoncé songs and the output is a piece of music, it’s a robot. It clearly couldn’t have added anything to this, and there’s nothing original there.’”
The law is generally reluctant to protect things “in the style of,” since musicians are influenced by other musicians all the time, says Chris Mammen, partner at Womble Bond Dickinson. “Should the original artist whose style is being used to train an AI be allowed to have any [intellectual property] rights in the resulting recording? The traditional answer may well be ‘no,’” Mammen says, “because the resulting work is not an original work of authorship by that artist.”
For there to be a copyright issue, the AI program would have to create a song that sounds like an already existing song. It could also be an issue if an AI-created work were marketed as sounding like a particular artist without that artist’s consent, in which case it could violate persona or trademark protections, Rose says.
“It’s not about Beyoncé’s general output. It’s about one work at a time,” says Edward Klaris, managing partner at Klaris Law. The AI-made track couldn’t just sound like Beyoncé in general; it would have to sound like a specific song she made. “If that occurred,” says Klaris, “I think there’s a pretty good case for copyright infringement.”
Directly training an AI on a particular artist could lead to other legal issues, though. Entertainment lawyer Jeff Becker of Swanson, Martin & Bell says an AI program’s creator could potentially violate a copyright owner’s exclusive rights to reproduce their work and create derivative works based upon the original material. “If an AI company copies and imports a copyrightable song into its computer system to train it to sound like a particular artist,” says Becker, “I see several potential issues that could exist.”
It’s not even clear whether AI can legally be trained on copyrighted music in the first place. When you purchase a song, Mammen asks, are you also purchasing the right to use its audio as AI training data? Several of the experts The Verge spoke to for this piece say there isn’t a good answer to that question.
During a panel The Verge recently hosted on the state of AI and music at Winter Music Conference, which included Bailey; Matt Aimonetti, CTO of Splice; and Taishi Fukuyama, CMO of Amadeus Code, an audience member asked just that. “What if I wanted to license my catalog to a company so its AI could learn from it?”
“Currently,” replied Aimonetti, “there’s no need for that.”
Even if an AI system did closely mimic an artist’s sound, an artist might have trouble proving the AI was designed to mimic them, says Aimonetti. With copyright, you have to prove the infringing author was reasonably exposed to the work they’re accused of ripping off. If a copyright claim were filed against a musical work made by an AI, how could anyone prove an algorithm was trained on the song or artist it allegedly infringes on? It’s not an easy task to reverse engineer a neural network to see what songs it was fed because it’s “ultimately just a collection of numerical weights and a configuration,” says Bailey. Additionally, while there are scores of lawsuits where artists were sued by other artists for failing to credit them on works, a company could say its AI is a trade secret, and artists would have to fight in court to discover how the program works.
“Getting to that point might only be available to the biggest artists that can afford it,” says Becker.
Copyright law will also have to contend with the bigger issue of authorship. That is, can an AI system claim legal authorship of the music it produces, or does that belong to the humans who created the software?
Arguments about whether code can be the author of a musical work in the US are over 50 years old. In 1965, the Copyright Office brought up this concern in its annual report under a section titled “Problems Arising From Computer Technology.” The report says the office had already received one application for a musical composition made by a computer, and it “is certain that both the number of works proximately produced or ‘written’ by computers and the problems of the Copyright Office in this area will increase.”
Despite this early warning flag, current US copyright law is still vague when discussing authorship of works that weren’t created by humans. For now, lawyers are still grappling with the implications of one ruling in particular, which doesn’t involve computers or AI at all: it’s about a monkey taking a selfie.
The case centered on a crested macaque that picked up the remote trigger for a photographer’s camera and took photos of itself. The resulting debate was over which creator should own the copyright: the photographer who set up the camera and optimized the settings for a facial close-up, or the monkey that pressed the remote trigger and took the photograph.
Ultimately, the US Court of Appeals for the Ninth Circuit decided that the monkey could not hold a copyright. The court made two points: the copyright law’s inclusion of terms like “children” and “spouse” implies an author must be human, and although courts have allowed corporations to sue, corporations “are formed and owned by humans; they are not formed or owned by animals.”
Many outlets used the monkey selfie ruling to discuss implications about artificial intelligence and authorship. If a monkey can’t own a copyright, the thinking goes, then what about a song created entirely by AI? Would authorship go to the humans who created the AI, the AI itself, or the public domain?
The heart of this problem is that current US copyright law never differentiates between humans and non-humans. But the Compendium of US Copyright Office Practices actually spends a lot of time talking about how humanness is a requirement for being considered a legal author. An internal staff guidebook for the Copyright Office, the Compendium has a section titled “The Human Authorship Requirement.” There’s also a separate bit to address copyright when a work lacks a human author. According to the Compendium, plants can’t be authors. Neither can supernatural beings or “works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”
The Compendium has been updated to note that “a photograph taken by a monkey” cannot be given a copyright. But there’s nothing yet on AI.
A mashup of all of these weird problems happened just weeks ago. Recently, the developers behind Endel, an app that uses AI to generate reactive, personalized “soundscapes,” signed a distribution deal with Warner Music. As part of the contract, Warner needed to know how to credit each track in order to register the copyrights. The company was initially stumped about what to list for “songwriter,” as it used AI to generate all of the audio. Ultimately, founder Oleg Stavitsky told The Verge, the team decided to list all six employees at Endel as the songwriters for all 600 tracks. “I have songwriting credits,” said Stavitsky, “even though I don’t know how to write a song.”
It sounds like a ludicrous outcome, but preventing humans from obtaining copyright on AI-assisted works could limit our ability to use these algorithms for creative purposes. “If you accept AI-generated work as a new form of art and take away the intellectual property rights of the person who created the algorithm,” says Klaris, “you’ve basically said, ‘you’re out,’ and take away their incentive to create.”
Endel was able to list its employees as songwriters because, in the US, you only need someone to claim they authored a work. But if there’s pushback — like in the monkey selfie case — authors have to prove that they made the work in question. The same might have to be done for music and AI in order to establish any precedent about how to treat this type of material in copyright law moving forward. And there are a million ways to parse the problem.
For now, there are far more questions than there are answers. If you take these problems a few steps further, you get into issues around AI and legal personhood that start to get “existential,” says Rose. Can software be creative? What if an AI software’s creations belong to no one at all?
“We haven’t figured it out,” Becker says. “This road is literally being paved as we’re walking on it.”