S-Curve Records president Steve Greenberg is an A&R veteran, a keen observer of the music landscape, and a previous contributor to Ross On Radio. His annual year-end essay focuses primarily on the emergence of AI as a music creator’s tool, but his top songs of the year playlist is always a highlight as well and can be heard here.
Happy 2025! The turning of the new year means we are exactly 500 years away from arriving In The Year 2525, the dystopian future described in Zager & Evans’ 1969 #1 hit. Let the countdown begin, and let’s hope we surpass expectations as a species.
Much has been written this year about the potential impact of generative AI on music creation and the music business, and I imagine 2025 will be the first year where consumers begin to notice that they’re regularly and knowingly interacting with music that has been created using AI. So I would like to devote this year’s essay to the future of musical creativity in an AI world. (This essay is adapted from a talk I delivered at Dartmouth College this fall. The entire talk can be found here.)
One critique I often hear levied at AI-generated music at this early stage is that it’s not “real” music, as it’s been “artificially” created by technology. So, let’s start with the question of what is real music, anyway? With the exception of the human voice itself, and I suppose playing percussion by clapping and stomping your feet, pretty much all other music created by humans uses some technology invented by humans. And pretty much all of those new technologies, whether they were drums or electric microphones or flutes or synthesizers, caused listeners a little bit of confusion at first encounter. So too with AI.
Ultimately, every one of the above-named innovations changed the way music was played and introduced new types of music that could be played. It’s no coincidence, for instance, that the electric guitar was invented just a couple of years before the dawn of rock ‘n’ roll. In the ‘60s and ‘70s, synthesizers were developed that created new sounds that literally no one had ever heard before in music. At the time they were considered weird novelties, but today they’re very much part of the palette of popular music. The same is true of digital sampling that became popular at the dawn of the hip-hop era. And autotune on vocals at the dawn of this century.
Autotune originally was invented as a way for singers who were less than perfect to cheat on their recordings, but now it’s just a tool, the way an electric guitar is a tool that makes an interesting sound and causes different music to be made than would’ve been made without it. New technology inevitably leads to new forms of music. And we generally can’t guess in advance what they might be.
While AI will have a profound impact on the future of music, it is likely that its real contribution won’t be deep fakes like the 2023 Drake/Weeknd mashup, the cause of much fear in the music industry upon its release. While there’s novelty right now to the idea of being able to make music that sounds just like your favorite recording artist, I suspect that ultimately imitating famous musicians will be akin to imitating famous painters. If I told you that I made new paintings that have perfectly imitated the style of Picasso or van Gogh, you might be curious for a second, but you wouldn’t hang them in a museum. Nobody values forgeries.
While you could imitate famous musicians, you couldn’t really do it in a way that was genuinely creative or as moving as the genuine works created by those actual people. Songwriters and recording artists don’t work in a vacuum. A person might write a song about flowers, but maybe that songwriter just had an argument with somebody, or stubbed their toe, or read something in the news that upset or delighted them, and a million other things. All those things at that moment impact the songwriter’s mood and the mood of everyone else involved in the creative process, and that impacts the song–that stuff of life is what humans bring to the equation.
So, telling AI to write a song about flowers in the style of Taylor Swift? Well, Taylor Swift never wrote a song in a vacuum and there’s no such thing as a generic Taylor Swift song, but that’s what you’d get, and that’s what the Drake/Weeknd mashup was. Every piece of creative work generated by any creative person is generated within a complex series of specific circumstances being experienced by the artist and their team at that moment.
Certainly, using synthetic vocals or music generated by AI in the style of a famous artist can have value if the human creator brings something new and exciting to the table. This kind of creative endeavor might even come to be considered the musical equivalent of fan fiction. Did you make a song with generative AI using, say, the voice of Elvis Presley as an interesting-sounding texture, or maybe that voice is singing the hook to a style of music that didn’t exist in Elvis’s lifetime? In that instance, Elvis’s voice becomes similar to a drum break from a James Brown record that you sample, loop, and use as the foundation of a hip-hop record.
But that sort of usage is ultimately the sideshow. What’s the real use of generative AI going to be in music? How will the technology be used to create something that could not have been created before? For instance, the major labels are envisioning a future where a majority of the music featured on social media platforms like TikTok will not be professional music, but will be user-generated music, done with the involvement of AI. I’m not referring here to users making things that sound like Taylor Swift, but rather users just making their own original music, no matter how good or bad it is, by feeding prompts to AI programs. That will likely be a major new form of expression for average music fans.
Users will be able to make all those new songs for their TikToks because generative AI is potentially the most powerful utility ever put in the hands of musical creators. It is a utility whose sheer computing power allows the creative process to unfold much more quickly, and in some cases that enables doing things that would be far too daunting for a human to even attempt alone.
Cassie Kozyrkov, Google’s former chief decision scientist, explained it to me another way: She suggested that generative AI is like a drug for a creative person, allowing the artist to put on a pair of goggles that lets him or her see things in a whole new way by presenting a potentially endless number of possibilities to consider, possibilities the artist never would have imagined on their own.
In my lifetime, the single biggest evolution in music-making was the switch from analog music to digital music. We transitioned from playing music that required physical skills, where you had to actually learn to play a guitar or a piano or a trumpet and master it with some part of your body, to digital music, where suddenly you didn’t. These days, any music you can imagine, you can make happen on your computer, and the new additional tool, AI, will further broaden what you can do, using even fewer physical skills.
But while you won’t need physical skill, what you will still need to make truly great music is creative talent. Just as you still need talent to make worthwhile music on your computer, you’ll need it when you use AI if you want to make something that has the ability to truly move people in an original way. Greatness still requires spark. And taste. I discussed this recently with Amanda Goodspeed, Meta’s former head of creativity, who emphasized how crucial taste is to the creation of art. In fact, taste is what ultimately makes an artist. It’s as important as pure musical skill when you’re trying to make music of value.
And that’s because art is mostly about making choices. Generative AI can make thousands of choices incredibly quickly, but it can’t exercise artistic taste. At least not yet. So, as humans and generative programs begin to collaborate more, it’s likely that AI will start to serve as a kind of sounding board for human ideas as well as making its own musical suggestions. Part of a feedback loop. Both the human and the AI will be making and evaluating each other’s suggestions.
AI could easily spit out some catchy soundalike Taylor Swift song on its own–and we know it’s already creating the kind of low-engagement music that gets used behind the scenes in TV shows or in YouTube advertisements. But on its own, it’s not producing music that comes from the heart—the music that results from the creativity and skills of talented people coming together.
Not everybody agrees about this point, by the way. Chris Mattman, a former NASA engineer who I spoke with recently, believes that generative AI will eventually learn the creative process itself. He believes that if you give AI the infinite catalog of songs, it will, over time, internalize the very idea of taste and begin to emulate it.
What generative AI will certainly lead to is new genres of music, and maybe even new musical sounds. AI can generate lots and lots of terrible ideas really quickly that can then be dismissed by the AI or a human creator until something with promise emerges, and then the human can work with the AI to refine that, so it becomes a sound that might be appealing to people.
I could, for instance, say to a generative AI program, I want you to come up with a brand-new musical instrument sound that no one’s ever heard before, but that people will really like. And there’s a chance that at some point in the process of doing this, a new musical sound will emerge that will change music the same way the electric guitar or the synthesizer changed music.
What does this creative revolution mean for the music industry? Generative AI is able to create new music because it has ingested millions of pieces of musical information by scraping the Internet for any musical sounds it can get access to. A lot of this music is under copyright and owned by someone. So, even if a generative AI program created a brand-new piece of music that didn’t sound anything like Drake or The Weeknd or Taylor Swift, or any existing artist or piece of music, that AI program was still trained on lots and lots of copyrighted music. And that’s why music companies will want to get paid for all those future new songs created by average music fans for TikTok using AI.
One way to describe all the music that goes into the creation of AI music is by thinking of AI music creation as sampling on steroids. Essentially, AI is using micro samples from countless songs to come up with something new. Instead of a few samples on a record, like in hip-hop, there are millions and millions of them, and you can’t even identify them because they’re so numerous and so micro, but they are there. The music industry long ago figured out how to get paid when artists use their material as samples in new recordings. And now the industry is determined to get paid for the raw material that goes into the training of generative AI. Remember, it doesn’t matter if the output resembles your copyrighted music. What’s important is what went into the program.
Imagine a giant pile of cement, that just looks like a big dust pile–and then imagine a skyscraper. That pile of cement looks nothing like a skyscraper, and yet the skyscraper couldn’t have been made without using the pile of cement. And the builder, presumably, has paid for the pile of cement. Right now, the music industry is highly focused on getting paid for their pile of cement.
The question of whether using large data sets of copyrighted music without permission to train AI would be permitted as fair use under US copyright law is currently being tested in court via lawsuits, including one by Universal Music Publishing and a few other music publishers, who’ve sued Anthropic for infringement. But it’s unclear how these suits will be decided.
But rather than relying on lawsuits, the major music companies’ emerging position is one of “OK we’ll give you permission to train your models—and you’re going to pay us. We’ll license our music to you and we’re going to get a royalty on every piece of new music that gets generated.” And because so much new music is going to be generated, even that micro-royalty is going to add up to a significant revenue opportunity for the music business.
That’s what’s most tantalizing about this imagined future: It’s all going to happen at scale. In the next few years, much more music is going to be created than was ever created in the entire history of mankind. We’re going to experience new sounds and new genres, and at a certain point the question will be “what’s the rate limit for the human ear–how many new genres can we consume? How do we sift through it all?”
The NASA data scientist, Chris Mattman, believes that pretty much anything you can imagine will ultimately come into being with generative AI, because the difference between AI and you and me is that AI is all the humans, and you and I are each only one human. Generative AI has been trained on all the humans and on all pieces of art. It’s been trained on data that’s not even public. It’s been trained on real people and on fictional characters.
One thing that’s certain is that there will arise a new generation of music creators who will have direct access to the public, the same way that a generation of amateur video creators did a few years back, which really wasn’t possible before YouTube came along. And now, over 50% of all video consumed in the world comes from user-generated sources–a seismic change.
For creative people, being in a collaborative process with AI will almost certainly be the dominant mode of engagement. Just as it’s no fun watching two computers play chess against each other, people who are interested in being creative won’t want to outsource that experience to machines. They’ll want to use machines as tools, not as replacements. The only people who are likely to want to turn over the entire process to generative models are people who in fact can’t offer much creativity in the first place, or don’t need what they’re trying to make to be very creative.
The opportunity here for the music companies is that they already own the best databases of copyrighted songs. One course of action for them is to make their own AI generative model: Turn their copyrighted material and metadata into very structured data in the form of tables with rows and columns, and then either sell that data to AI companies—sell bricks instead of cement–or start their own businesses that put that data to work.
The six largest companies in the AI field, all tech companies, will have spent approximately $30 billion in the next 12 months acquiring data or licenses to intellectual property. So that’s the real opportunity for the music companies, not suing the AI business.
And on top of the foundational AI models, we can imagine people building apps that will be compatible with those models. Today, the App Store is a $1 trillion economy, so you will absolutely see enormous amounts of revenue generated by new applications that get built, new consumer experiences, novel forms of entertainment, different business models.
In 2023, there were 20 billion pieces of generative AI content created, and during the first 90 days of 2024 there were nearly 100 billion creations using generative AI tools. We’re barely beginning to scratch the surface: the volume of AI-created works is exploding, and it is likely that, just as with video and user-generated content, a majority of the music we consume in the near future will at the very least have been co-piloted by AI.
Finally, for all those still worried that human beings will eventually be cut out of the creative process, let’s consider this: Because so much AI generated music will exist in the future, there is a fear that in the future the only new music AI will have to train on is AI created music. That would create a very big problem, called model collapse.
AI models trained on human text or human sound can create high-quality new products, but when you look at the statistical properties of those new songs or bits of text, you can tell they were produced by a model and not a person. Simply put, model collapse is what happens when you try to train AI on things that were themselves AI-created: it all breaks down mathematically, because the inputs no longer have the right statistical properties, and you end up with degraded outputs. So, there’s an ongoing need for new human data. Thus, there will always be a role for new artists with new tastes, creating new music in new genres, with that music then being licensed out to train future AI models. You’ll always have to go back to humans for music to continue to move forward, or music will just stagnate.
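For readers who want to see the statistical intuition behind model collapse, here is a toy sketch of my own (an illustration, not anything from the essay or from any real AI system): treat “songs” as words drawn from a long-tailed distribution, then repeatedly retrain a model on its own output. Because each new generation can only reuse words that survived the previous one, the rare tail erodes and diversity ratchets downward.

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: "human" data drawn from a Zipf-like distribution with a
# long tail of rare words, standing in for the diversity of human music.
vocab = [f"word{i}" for i in range(100)]
weights = [1.0 / (i + 1) for i in range(100)]
data = random.choices(vocab, weights=weights, k=500)

sizes = [len(set(data))]  # how many distinct words survive each generation

for generation in range(10):
    # "Train" the next model on the previous generation's output: its only
    # knowledge is the empirical distribution of that sample.
    counts = Counter(data)
    words, freqs = list(counts), list(counts.values())
    # "Generate" new data from that model. Rare words that missed the cut
    # last generation can never come back, so the tail keeps eroding.
    data = random.choices(words, weights=freqs, k=500)
    sizes.append(len(set(data)))

print(sizes)  # distinct-word counts, generation by generation
```

Each generation’s vocabulary is a subset of the one before it, so the counts can only shrink or hold steady, never recover, which is the degradation the essay describes, in miniature.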
I don’t foresee a future where humans forfeit the music making process entirely to machines. Rather, the question that concerns me is: In the future will generative AI be a utility used by creative and talented humans, or will talented humans become a utility used by generative AI programs to provide fresh raw material?
With that, I wish us all peace and happiness in 2025, and over the next 500 years, as well.
This story first appeared on radioinsight.com