Deepfakes of Dead People: Rewriting History
The Erosion of Trust in a World of AI-generated Media
The emergence of increasingly realistic AI-generated video, exemplified by tools like OpenAI’s Sora and Google’s Veo 3, is raising concerns about the potential for widespread misinformation and the erosion of trust in media and institutions. Experts warn that distinguishing between authentic and fabricated content is becoming increasingly arduous, even for trained professionals.
Protecting Legacies in the Age of AI
“We might see trust in all sorts of media establishments and institutions erode,” said Mayes, highlighting the broader societal implications of this technology. Mark Roesler, founder and chairman of CMG Worldwide, which manages the intellectual property of deceased celebrities like James Dean, Neil Armstrong, and Albert Einstein, echoed this sentiment. He noted that Sora is the latest in a long line of technologies posing challenges to protecting the legacies of those he represents.
“There is and will be abuse as there has always been with celebrities and their valuable intellectual property,” Roesler wrote in an email. He pointed out that when CMG began representing deceased personalities in 1981, the internet didn’t even exist, underscoring how rapidly the landscape of intellectual property protection is evolving.
Despite the risks, Roesler also acknowledged the positive potential of AI, stating, “New technology and innovation help keep the legacies of many historical, iconic personalities alive, who shaped and influenced our history.” CMG intends to continue representing its clients’ interests as AI applications like Sora become more prevalent.
The Limits of Current Detection Methods
OpenAI has implemented several tools to identify Sora-created content, including invisible signals, visible watermarks, and metadata. However, these measures are not foolproof. Sid Srinivasan, a computer scientist at Harvard University, explained that “Visible watermarks and metadata will deter casual misuse through some friction, but they are easy enough to remove and won’t stop more determined actors.”
Srinivasan believes that an invisible watermark coupled with a detection tool would be the most reliable approach, but access to such tools remains limited. “Ultimately, video-hosting platforms will likely need access to detection tools like this, and there’s no clear timeline for wider access to such internal tools.”
Wenting Zheng, an assistant professor of computer science at Carnegie Mellon University, agrees, suggesting that OpenAI should share its detection tools with platforms like Meta and X to help identify AI-generated content on social media. However, when asked about this possibility, an OpenAI spokesperson directed NBC News to a general technical report that lacks specific details.
AI vs. AI: The Future of Deepfake Detection
Some companies are turning to AI itself to detect AI-generated content. Ben Colman, CEO and co-founder of Reality Defender, a deepfake-detecting startup, explained that “Human beings – even those trained on the problem, as some of our competitors are – are faulty and wrong, missing the unseeable or unhearable.”
At Reality Defender, “AI is used to detect AI,” Colman stated. He believes that AI can identify subtle cues in videos that humans cannot, making it a more effective detection method. Similarly, McAfee’s Scam Detector software analyzes a video’s audio for “AI fingerprints” to determine its authenticity, according to Steve Grobman, chief technology officer at McAfee.
However, Grobman cautioned that “new tools are making fake video and audio look more real all the time,” and that 1 in 5 people have already been victims of a deepfake scam.
The Digital Divide and the Evolution of Deepfakes
The quality of deepfakes varies across languages, with AI tools performing considerably better in commonly used languages like English, Spanish, and Mandarin than in less common ones. McAfee is actively working to address this disparity. “We are regularly evolving the technology as new AI tools come out, and expanding beyond English so more languages and contexts are covered,” Grobman said.
Concerns about deepfakes were especially high leading up to the 2024 elections, with many predicting a surge in AI-generated misinformation. While this didn’t fully materialize, the increasing realism of AI-generated media is raising new alarms.
Until recently, AI-generated content was generally distinguishable from real content. However, models released in 2025, such as Google’s Veo 3, have been described as “terrifyingly accurate” and “dangerously lifelike,” prompting some to question whether we are approaching a point where discerning reality from fabrication will be unfeasible. One reviewer even asked, “Are we doomed?”
