The race to integrate generative AI into mainstream music experiences is heating up, with both Google and Apple making significant moves this week. While Apple’s approach focuses on playlist curation, Google is pushing the boundaries with a new feature allowing users to create original music tracks from text or image prompts.
Google’s Gemini AI assistant, powered by the Lyria 3 model, can now generate 30-second musical pieces. Users over the age of 18 will be able to input text descriptions – like “a comical R&B slow jam about a sock finding its match” – or upload images, and Gemini will produce a corresponding audio track, complete with custom lyrics or instrumental arrangements. The feature is currently rolling out on the desktop version of Gemini and will soon be available on mobile. Google is also leveraging its image-creation model, Nano Banana, to generate custom cover art to accompany these user-created tracks, enhancing the sharing experience.
This move by Google isn’t simply a technological demonstration; it’s a strategic play to strengthen its consumer offerings in a competitive landscape dominated by OpenAI’s ChatGPT. Google’s Gemini 3 AI model, released in November, garnered widespread praise, prompting a response from OpenAI CEO Sam Altman, who reportedly initiated a “code red” to accelerate ChatGPT improvements. The addition of audio creation tools underscores Google’s commitment to staying ahead in the AI race.
Apple, meanwhile, is taking a different tack. With the beta release of iOS 26.4, Apple Music users will gain access to “Playlist Playground,” an AI-powered feature that generates playlists based on text prompts. These playlists will include cover art, descriptions, and a selection of 25 songs. This feature directly competes with a similar offering from Spotify Technology SA, signaling a broader industry trend toward AI-assisted music discovery and personalization.
The initial reaction from the market has been mixed. Shares of Spotify briefly erased gains following Google’s announcement, while Sirius XM Holdings Inc. saw a similar dip. However, analysts at Bloomberg Intelligence suggest that Google’s move isn’t necessarily a “breaking point” for Spotify, but rather a potential catalyst for the platform to accelerate its own AI-driven music creation features.
The integration of AI into music creation isn’t without its challenges. The music industry has historically been wary of generative AI, viewing it as a potential threat to copyright and intellectual property. In 2024, major labels – Universal Music Group, Warner Music Group, and Sony Music Entertainment – filed lawsuits against startups Suno AI and Udio AI, alleging copyright infringement. Warner Music has since reached an agreement with Suno, and both Warner and Universal Music have established agreements with Udio to ensure the applications operate with appropriate licensing and controls.
Google is attempting to address these concerns proactively. The company states that its Lyria 3 model is designed to avoid replicating the work of specific artists. If a user names a particular musician, Gemini will interpret that as “broad creative inspiration” and generate a track that shares a similar style or mood, rather than directly copying the artist’s work. Google also emphasizes that its training data for Lyria 3 consists of music that YouTube and Google have the rights to use, adhering to their terms of service, partner agreements, and applicable laws.
Apple, too, is navigating this complex landscape. While details regarding the AI powering Playlist Playground are less specific, the company is clearly aware of the need to balance innovation with respect for artists’ rights. The timing of these announcements, coinciding with ongoing discussions about AI and copyright, underscores the sensitivity of the issue.
The differing approaches taken by Google and Apple reflect their broader strategies in the AI space. Google is positioning itself as a leader in generative AI, pushing the boundaries of what’s possible with its models. Apple is taking a more measured approach, integrating AI features into its existing products in a way that enhances the user experience without disrupting the core functionality.
The long-term implications of these developments remain to be seen. Will AI-generated music become a mainstream form of entertainment? Will it empower artists or displace them? These are questions the industry will be grappling with in the months and years to come. For now, the arrival of these new features marks a significant step toward a future where AI plays an increasingly prominent role in how we create and consume music.
