Google’s announcement that its Gemini app now writes music for you isn’t just another product update; it feels like a symbolic surrender to a long-standing critique of Big Tech: that creative work is now simply another checkbox for a machine.
Yesterday, Google launched Lyria 3, a new feature within the Gemini app that lets users generate 30-second tracks, complete with lyrics and cover art, from a text prompt or a photo. The process requires no instruments, no experience, and no tactile skill.
Essentially, Lyria 3 is a LEGO set for “songs,” each lasting about as long as a typical TikTok loop. Google pitches it at YouTube creators, which makes sense given the limited track length.
But the underlying issue extends beyond this single feature. The proliferation of AI-generated art, music included, raises fundamental questions about the nature of creativity, and that is the concern I want to address.
As Bob Dylan famously said, “Behind every beautiful thing, there’s some kind of pain.” Throughout history – in art, music, literature, and poetry – pain has often been the primary fuel for creation.
How to put this? The only pain Lyria 3 might experience is a server-overload alert, not heartbreak. Real songwriters know that soul isn’t born in a 30-second prompt; it’s extracted through years of mistakes, late nights, losses, and small revelations.
Call it a toy if you like. Google likely will. They’ve even embedded a SynthID tag into the outputs, officially labeling the 30-second ditties as “AI-generated,” not “inspired.” This acknowledges copyright concerns but also implies that these aren’t truly works of art, but rather statistical by-products of pattern recognition.
The novelty isn’t the issue. Much of this has been possible in research labs and through APIs for years, with creators experimenting with generative music tools as collaborators. What Lyria 3 does, and what makes this moment significant, is normalizing the idea that anyone can “write” a song with a chatbot and a mood descriptor. This isn’t empowerment; it’s a devaluation of craft.
Subscribing to an AI music generator like Suno, which offers more complex features, doesn’t make you an artist, any more than learning to write prompts for a large language model and generating pages of text makes you a writer.
Imagine a world where every blog post is AI-generated, and every company churns out generic music for ads and social media. In that economy, a professional songwriter’s unique skill becomes as optional as knowing how to use a metronome.
You could ask Gemini for an “emotional indie ballad about a lost sock,” and you’d get something. Whether it possesses genuine coherence or soul is left to the listener to decide. It’s fun to use with friends, for short-form videos, or to impress a date.
Video: Gemini Lyria Music Generation Feature – Socks, uploaded by Google on YouTube
The 30-second limit is deliberate. By keeping outputs short and legally ambiguous, it sidesteps deeper legal and ethical questions about training data and the mimicking of existing works. To Google’s credit, that restraint counts in its favor.
Even within that limit, it’s now possible for someone with no musical training or cultural context to generate riffs, lyrics, and chord progressions that sound, to the casual ear, adequately musical. In an attention economy obsessed with shareability, “adequate” quickly becomes sufficient.
This matters because real songs – the ones that endure and carry human experience – aren’t just collections of musical elements. They’re shaped by story, risk, cultural memory, and sometimes contradiction.
Tom Waits once said, “I don’t have a formal background. I learned from listening to records, from talking to people, from hanging around record stores, and hanging around musicians and saying, ‘Hey, how did you do that? Do that again. Let me see how you did that.’”
That was the research and prompting of the past. It’s not simply about reducing time or freeing up schedules. It’s about the entire process, the interaction with other artists, humans, and ideas.
Those are qualities machines can mimic but not originate. When machines take the first pass at creation, and the commercial ecosystem embraces that output because it’s cheap and fast, the incentives shift – not gradually, but suddenly.
The record industry is already grappling with AI. Streaming services, publishers, and labels are experimenting with algorithmic playlists and automated composition. Gemini’s Lyria 3 extends that experiment to public perception. A whole generation may come to believe that “making music” means typing a description and choosing a style. Songwriting becomes a UX problem, not a craft.
This raises a critical question: in a world where AI can conjure up a passable hook on demand, what will distinguish professional artists? If the answer is only brand story or marketing muscle, we aren’t celebrating creativity; we are monetizing it out of existence.
Tech companies like Google will frame this as liberation. And, in a literal sense, anyone who’s ever wanted to hear a short tune about a sock’s existential crisis now can. But liberation without valuing the creator is simply consumerism in disguise.
Lyria 3 might be useful for GIF soundtracks, social clips, and viral TikTok reels, but it doesn’t render professional musicians obsolete; it makes their work less necessary to the platforms that reward hyper-consumable content. That’s a different threat than outright replacement: it’s obsolescence through trivialization.
If AI is to be part of musical creation, it should be as an assistant to the composer, enhancing ideas, not replacing them. What we’re seeing with Gemini isn’t collaboration but outsourcing. And the lesson for artists isn’t to fear the algorithm, but to demand clarity about where AI replaces labor and where it augments human sensibility.
Because once the marketplace equates the two, the humans who do the work will be left asking for royalties in a language no one else wants to speak.
Platforms like Deezer are building AI detection tools to flag and label AI-generated tracks, excluding them from recommendations and royalty pools. That keeps human songwriters from being buried under synthetic content and lets listeners distinguish AI output from human creations. If you care about preserving real artistry in a world of text-to-tune generative models, support platforms that offer that kind of transparency.
To be clear, I’m not condemning Lyria 3 itself. Letting people turn a photo or a mood into a short track sounds like fun for casual use and creative experimentation, which is exactly what Google intends. But as these models proliferate, we risk confusing novelty with art. And the blame for that won’t lie with the tech companies; it will lie with us.
