David Greene, the former longtime host of NPR’s “Morning Edition” and current host of KCRW’s “Left, Right, & Center,” is suing Google, alleging that the company used his voice without permission in its NotebookLM AI tool. The lawsuit, reported by The Washington Post, centers on a male podcast voice within NotebookLM that Greene claims replicates his distinctive cadence, intonation and even habitual filler words like “uh.”
NotebookLM, an experimental feature within Google’s broader AI efforts, allows users to upload notes and documents, then interact with that content through an AI interface, including the ability to generate podcasts with AI-generated hosts. The core of the dispute lies in the similarity between Greene’s professional speaking voice and the voice Google uses for this podcasting function.
According to Greene, the realization that Google may have replicated his voice came after a surge of emails from friends, family, and colleagues pointing out the uncanny resemblance. He described his voice as “the most important part of who I am,” highlighting the deeply personal nature of the alleged infringement. This isn’t simply a matter of lost commercial opportunity, but a concern over the unauthorized replication of a core aspect of his professional identity.
Google, however, disputes the claim. A company spokesperson told The Washington Post that the voice in NotebookLM’s Audio Overviews is based on a “paid professional actor” hired specifically for the purpose. This assertion directly contradicts Greene’s belief that his own vocal characteristics were used as the basis for the AI-generated voice.
The technical details of how Google creates these AI voices are not fully public. However, the process generally involves analyzing large datasets of speech to create a model capable of generating new audio that mimics the characteristics of the source material. This can involve techniques like voice cloning, where an AI learns to reproduce the nuances of a specific speaker’s voice. The lawsuit raises questions about the ethical and legal boundaries of such technologies, particularly regarding the use of individuals’ voices without their explicit consent.
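Google has not published how NotebookLM’s voices are built, but one common building block in voice-cloning and speaker-verification research is the speaker embedding: a fixed-length vector summarizing a voice’s acoustic characteristics, with resemblance between two voices often scored by cosine similarity. The minimal sketch below illustrates only that scoring step; the function name, vectors, and values are hypothetical, not Google’s actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for two voices
# (real systems use hundreds of dimensions extracted from audio).
voice_a = [0.90, 0.10, 0.40]
voice_b = [0.85, 0.15, 0.38]

score = cosine_similarity(voice_a, voice_b)
# A score near 1.0 suggests the two voices are acoustically very similar.
```

In a dispute like this one, a court would weigh far more than a single similarity number, but metrics of this kind are how researchers quantify the “uncanny resemblance” listeners report.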
This case arrives amidst a growing wave of concerns surrounding AI-generated content and its potential to infringe on intellectual property rights and personal likeness. A similar dispute arose last year when actress Scarlett Johansson objected to OpenAI’s creation of a ChatGPT voice that she felt too closely resembled her own. OpenAI subsequently removed the voice in question.
The Johansson case and now Greene’s lawsuit underscore a critical challenge in the rapidly evolving landscape of AI: defining the line between legitimate imitation and unauthorized replication. While AI voice generation offers exciting possibilities for content creation and accessibility, it also presents risks to individuals whose voices and likenesses could be exploited without their knowledge or permission. The legal framework surrounding these issues is still developing, and these cases are likely to play a significant role in shaping future regulations and industry practices.
The core of the legal argument likely rests on whether Google’s use of the voice constitutes a violation of Greene’s right of publicity – the right of an individual to control the commercial use of their name, image, and likeness. Establishing a direct link between Greene’s voice and the AI-generated voice will be crucial to his case. The fact that numerous individuals have noted the resemblance will likely be a key piece of evidence, but Google will likely argue that the voice is sufficiently distinct from Greene’s to avoid legal liability.
The implications of this lawsuit extend beyond David Greene and Google. It raises broader questions about the responsibility of AI developers to ensure that their technologies do not infringe on the rights of individuals. As AI voice generation becomes more sophisticated and widespread, the need for clear legal guidelines and ethical standards will become increasingly urgent. The outcome of this case could set a precedent for future disputes involving AI-generated voices and the protection of personal identity in the digital age.
The case is particularly noteworthy given Greene’s long and established career in broadcasting. His voice is intrinsically linked to his professional persona, built over decades of work on NPR and KCRW. The alleged unauthorized use of that voice represents a potentially significant harm, not just financially, but also to his professional reputation and identity.
