Something unusual is happening in the music industry.
Not just new technology or new tools, but a shift in what an “artist” actually is. Recently, AI-generated artists have begun to appear – not just songs, but entire identities. Some examples include Timbaland’s Tata Taktumi, an AI-generated artist developed through his company Stage Zero, positioned within a new AI-driven genre dubbed “A-Pop”. Tata is described not as a simple avatar but as a hybrid AI-powered artist – a “living, learning, autonomous” entity.
Another example is FN Meka, a fictional rapper/avatar originally developed by Brandon Le in 2019. Self-described as a “virtual rapper,” FN Meka became the first AI-driven music project to be signed to a major label. However, following controversy over racial stereotyping, Capitol Records dropped the project in 2022.
At the same time, major labels are fighting a different battle.
According to BBC News, Sony Music alone has requested the removal of more than 130,000 AI-generated tracks that imitate its artists from streaming platforms.
Many of these deepfake recordings targeted major acts including Beyoncé, Harry Styles, and Queen. The true volume is likely significantly higher, as this figure reflects only one company’s enforcement efforts, not the total number of AI-generated tracks currently in circulation. At first glance, these developments seem separate. One is innovation, the other is a problem. But they are part of the same shift.
For most of modern music history, authorship was relatively clear. An artist created, a listener consumed. Even with multiple layers – producers, songwriters, labels – authorship remained anchored to a human source.
AI disrupts that anchor. It allows music to be generated without a clear origin or with a simulated one. A voice that sounds like someone. A style that resembles someone. An identity that feels familiar, but does not exist. Authorship used to be a given. Now it’s a question.
Who is the artist? The model? The person who trained it? The one who prompted it? Or the artist being imitated? There is no clear answer, and that is the problem. But there is another layer, a more practical one.
What does this mean for artists? For their income? For their position in the system?
Streaming has already shifted music toward volume and consistency – a transition that began in the early 2010s with the rise of platforms like Spotify and accelerated as algorithmic playlists started rewarding frequent releases over singular moments. Now imagine that volume without human limitation. An environment where music can be generated endlessly – in any style, in any voice. When creation becomes infinite, value doesn’t disappear; it relocates. Away from the music itself.
And toward what cannot be generated as easily – identity, trust, context. But even that is being challenged. If an AI can simulate a voice, recreate a style, and attach it to a believable narrative, the line between real and artificial begins to blur.
Not technologically, but perceptually. This has direct consequences. For streaming platforms, it raises a structural issue. How do you distribute revenue in an ecosystem where content can be generated infinitely? Where a growing share of that content may not be tied to a human creator? For artists, the pressure shifts.
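The structural issue can be made concrete with a little arithmetic. Under the pro-rata payout model most streaming services use, royalties come from a fixed pool divided by stream share – so every AI-generated stream added to the denominator dilutes what human catalogs earn, even if human listening never changes. The sketch below is illustrative only; the pool size and stream counts are assumptions, not platform data.

```python
def pro_rata_payout(pool: float, artist_streams: int, total_streams: int) -> float:
    """An artist's cut of a fixed royalty pool, proportional to stream share."""
    return pool * artist_streams / total_streams

POOL = 1_000_000.0        # monthly royalty pool (assumed, illustrative)
HUMAN_STREAMS = 10_000_000  # one human artist's streams, held constant
BASELINE_TOTAL = 200_000_000  # platform-wide streams before AI flooding

# Same human listening; only the volume of AI-generated streams grows.
for ai_streams in (0, 50_000_000, 500_000_000):
    total = BASELINE_TOTAL + ai_streams
    payout = pro_rata_payout(POOL, HUMAN_STREAMS, total)
    print(f"AI streams: {ai_streams:>11,}  ->  payout: ${payout:,.2f}")
```

With these assumed numbers, the artist’s payout falls from $50,000 to roughly $14,286 as AI volume grows – without losing a single listener. That is the dilution problem in miniature.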
Artists are no longer competing on output; they are competing on existence. This tension is already visible. In 2023, an AI-generated track imitating Drake and The Weeknd went viral before being removed from streaming platforms following pressure from Universal Music Group. At the same time, artists like Grimes have taken a different stance, openly allowing AI-generated versions of their voice under revenue-sharing models.
And for the industry, it creates a contradiction. The industry is building systems it’s trying to protect itself from.
Platforms have begun deploying detection and control mechanisms – Spotify has introduced impersonation rules, spam filters, and AI disclosure systems, while Deezer uses AI detection tools to label synthetic tracks and exclude fraudulent streams from royalties. At the same time, major labels rely on large-scale takedown requests and legal enforcement to remove AI-generated songs that imitate their artists.
Which raises a more uncomfortable question. Is this something that can actually be controlled? Or is fighting it merely a delaying action? Because historically, technological shifts are rarely reversed. They are absorbed, integrated, and eventually normalized. If that is the case, then the question is no longer how to stop AI in music, but how to work with it.
What kind of rules need to exist?
What defines ownership when authorship is unclear? How do we protect artists without restricting innovation? And perhaps more importantly, what happens if AI moves beyond being a tool? Not just generating music when prompted, but operating within systems. Learning, adapting, producing without direct human intention behind each output. At that point, the question is no longer about imitation. It is about autonomy.
My sense is that we are moving toward a world where music begins to resemble a kind of cultural marketplace – closer to how we consume food. Everything becomes available in parallel: human-made, AI-generated, hybrid. Labeled, categorized, positioned. And as with food, the distinction may not disappear; it may become a premium.
Human-made music could shift from being the default to being a choice. Something closer to an “organic” product, more intentional, more scarce, and likely more expensive. At the same time, taste may replace truth as the dominant filter. The question will no longer be “is this human?” but simply: do I like it?
The question is no longer whether something is real. But whether that distinction still matters to us. I don’t think we will be able to prevent this shift. Only to navigate it – and decide, individually, what we choose to value. And we are not fully prepared for that.
This is not just a legal issue or a technological one. It is a cultural one. Because music has never been only about sound; it has always been about connection. For millennials, music has been a way of locating ourselves in the world. Of finding people who think, feel, and see things the same way. It has been a signal of identity, of belonging, of perspective.
Music hasn’t just been something we consume; it has been a way of recognizing ourselves in others. And connection assumes something human on the other side. If that assumption begins to collapse, then the relationship between artist and audience changes.
And with it, the entire structure of the industry. We are not just entering an era of AI-generated music. We are entering an era where authorship, ownership, and identity are being renegotiated in real time. The question is not whether this shift will happen; it already is happening.
The question is whether we shape it. Or whether we arrive too late, trying to understand a system that has already moved beyond us.