Who owns the song you wrote with AI? An expert explains

Musicians’ styles are being replicated by music-generating AI. Image: Unsplash/marcelalaskoski

Douglas Broom
Senior Writer, Forum Agenda

  • Artificial intelligence potentially empowers us all to become creators – but who owns the outcome?
  • The World Economic Forum has warned that copyright laws need to change to keep up with the potential of AI.
  • Here, a professor of technology discusses how AI is disrupting our ideas – and laws – around intellectual property.

Artificial intelligence offers even the least musical among us the chance to get in touch with our inner songwriter. But what happens if you create a hit? Who owns the copyright? And what about the artist whose style is being plundered to create the AI hit? These are questions troubling lawyers and digital media experts.

That's why we sat down with New York University's Professor Arun Sundararajan at the World Economic Forum's 2023 Annual Meeting of the New Champions in Tianjin, China, to seek answers to questions that are far from theoretical.

“Generative AI systems don't just generate new content in abstraction, they can be tailored to generate content in the style of a specific person,” Professor Sundararajan explained. “You can create new Beatles songs. You can write a poem like Maya Angelou.”

That is exactly what happened recently when a series of cover versions of popular songs started to appear on TikTok. It quickly emerged that the artists who appeared to be performing the tracks had never recorded them – they had been produced entirely using artificial intelligence (AI).

“What’s the point of having intellectual property law if it can't protect the most important intellectual property … your creative process?” Sundararajan asked.

The Forum’s 2023 Presidio Recommendations on Responsible Generative AI warned it was “essential for policy-makers and legislators to re-examine and update copyright laws to enable appropriate attribution, and ethical and legal reuse of existing content.”

Here’s a summary of our discussion with Arun Sundararajan, Harold Price Professor at New York University Stern School of Business.

Who owns what we create with AI?

Sundararajan: This is one of the central policy and consumer protection issues when thinking about the governance of generative AI. Generative AI systems don't just generate new content in abstraction, they can be tailored to generate content in the style of a specific person.

You can create new Beatles songs. You can write a poem like Maya Angelou. And, you know, at this point, the ownership of a person over their creative process, over their intelligence, starts to get challenged by technology.

One way to think about this is: ‘What's the point of having intellectual property law if it can't protect the most important intellectual property – your individual intelligence, your creative process?’

In the past, we've never really asked this question because it was assumed by default that, of course, you own how you create things. And so at this point in time, we have to extend intellectual property law to protect not just individual creations, but an individual's process of creation itself.

Is it fair to say our identities are at stake?

Sundararajan: I think one's creative process is part of one's identity as a human, but it's also an important part of one's human capital. You know, you can spend decades becoming really good at doing things in a specific way, and you have an incentive to do that because you own it and because you can enjoy the spoils, the returns from all of that investment.

The trouble is that now a generative AI system can take hundreds of examples of what an individual has created and start to replicate their creative process, in some way stripping away their human identity or taking part of their human capital away from them.

And this is something that we are seeing a lot in the creative industries, in the art industry and the music industry. Cartoonists' styles are being replicated by art-generating AI. Musicians’ styles are being replicated by music-generating AI.

Are there implications for business?

Sundararajan: It’s not just an issue for creative artists. Someone could be really skilled at business development in a company. That talent, that know-how, those years of experience that are rendered into email exchanges with clients, a particular style of talking to a particular potential client, a particular sequence of messages, a particular sequence of phone calls that leads to you closing the deal...

And when this business development executive leaves the company, of course they leave their work product behind. But now that work product can be used to create a digital replica of them – a replica they may not have intended to leave behind, and one they can't take with them the way we have always taken our human capital with us when we move jobs.

What is the difference between mimicry and replicating something human?

Sundararajan: Once something is successful, people start to mimic that style, and that has always been the case. If I'm a musician and I want to imitate someone's style, there will still be differences between my style and theirs that reflect my own talent and creative ability. With an AI twin, in some sense you are creating an exact replica rather than simply mimicking the style.

I think the second big difference is the scalability of this. Once this is encoded into an AI system, new creations can be generated at a breathtaking pace.

Who is most at risk from this?

Sundararajan: If you're an incredibly famous artist there's very little danger, because if someone generates a replica of your music, you can simply say ‘It’s not mine’ and then it won't be as popular. You've got the brand that allows you to get the economic returns from your creations.

On the other hand, if you're an up-and-coming band that isn't very well known and you start to do somewhat well on Spotify, and someone encodes your style of music into an AI and starts to generate hundreds of different examples, then your ability to even build that brand can be curtailed before you have the chance to do so. And so you're unable to get the economic returns associated with either your talent or your human capital.

So where does the law stand at the moment?

Sundararajan: Every AI system we use was created by training it on examples. A lot of discussion has focused on whether it’s OK for AI companies to use other people’s creations for this. The law is unclear at present.

Some people argue that this falls under the fair use doctrine, which exists in the US, the EU and China (although it may go by different names in different places). Under fair use, if you're transforming what you're using into something sufficiently different, in a way that won't affect the commercial value of the original, then you're not infringing on copyright.

The law is also unclear today on who owns a particular creation generated by AI. In some jurisdictions, if an AI system writes a story, generates a piece of art or composes a song completely on its own, with no human participation at all, nobody owns it and it falls into the public domain.

If there's enough human assistance – such as providing a storyline that the AI completes for you, or outlining a song that the AI then generates – then you can continue to own the copyright.

On the issue of who owns the creative process, there seems to be little or no law that gives us a definitive answer on how we can reclaim ownership of it.

What's likely to happen is that the US, the EU or China – one of these three – is going to take a leadership role and define the first set of guidelines and laws around individuals' ownership of their creative processes and the use of data to train something like ChatGPT.

Why is the question of who owns your intelligence so important?

Sundararajan: The biggest difference between generative AI and the AI that preceded it is that generative AI can create entirely new content based on past patterns or examples. This means that it can create new content in the style of a particular person. So for the first time, the ownership of a particular individual's creative process or their intelligence is up for grabs.

And if we don't retain ownership of our intelligence and creative process, then individuals will have a much lower incentive to develop that intelligence, or that human capital, in the first place. And this is really bad for a capitalist society.

Ceding our creative process to an AI poses a challenge to creators – it takes away part of their identity and gives them less incentive to develop the ability to practice their art. Some people might argue that's OK because, for the rest of the world, AI systems are going to generate a much greater variety of art, music and literature.

I think the jury is still out on which of these two camps is going to be right. But certainly being able to embed someone's creative process into a generative AI system takes away a bit of their humanity, a bit of their identity.

How do you see the future?

Sundararajan: Well, in the broadest sense, we need to retain ownership of our intelligence as humans. How do we update intellectual property law to protect not just individual creations, but to protect an individual's creative process?

There are some promising early steps towards drawing the line between AI creation and human creation. In many jurisdictions, something that is generated by an AI system has to be marked as such. And if you are interacting with an AI that appears like a human, you have to be told that you are interacting with an AI.

I think a logical next step would be to ask the question: ‘If an AI is generating these artefacts, these objects, these pieces of art, who owns the creations of the AI?’ And then the step after that is to decide who owns the creative process if that creative process mimics a particular individual.
