Exposed: YouTube Unmasks Celebrity Deepfakes with Groundbreaking AI Copy Detection Tool
YouTube Develops AI Technology to Prevent Deepfakes and Protect Creators’ Rights
YouTube is taking steps to prevent deepfakes using artificial intelligence (AI). As the AI era brings a proliferation of deepfake content that forges the faces and voices of celebrities, YouTube is developing tools to detect it.
According to YouTube’s official blog, the company is developing a new “similar voice identification technology” so that AI-generated content can be managed under the creator’s control. Amjad Hanif, YouTube’s Vice President of Creator Products, said, “AI should serve as a tool to enhance human creativity,” adding, “We will help creators express themselves in more innovative ways through AI.”
The technology automatically detects and manages content that imitates a creator’s voice, and it is slated to launch officially early next year. YouTube also announced that it is developing technology that will allow creators, actors, musicians, athletes, and other industry figures to detect and manage AI-generated content that uses their faces.
In addition, the company emphasized that upcoming privacy updates will provide tools to prevent the misuse of AI technology. It also prohibits AI companies from training on YouTube content without permission. YouTube itself has long used content uploaded to the platform to improve it through machine learning and AI applications.
However, it said it is responding forcefully to third parties that scrape creators’ content without permission and is continuously investing in systems to prevent this. Vice President Hanif warned, “Third parties using creators’ content without permission is a violation of the terms of service and an act that undermines the value creators receive.”
YouTube also announced new ways to give creators more choice when collaborating with third parties on AI tools. It plans to provide tools that let creators decide how third parties use their content, with more details to be released later this year.
AI-Generated Music Scandal
A man in North Carolina, USA, allegedly used artificial intelligence (AI) to create hundreds of thousands of fake songs attributed to fictitious bands, uploaded them to streaming services, and used fake listeners to stream them, US prosecutors claimed.
The suspect, 52-year-old Michael Smith, is accused of fraudulently collecting royalties from digital streaming platforms such as Amazon Music, Apple Music, and Spotify for seven years. Prosecutors say Smith streamed the AI-generated music he created billions of times, earning $10 million.
“Smith was actually stealing royalties that should have been paid to the musicians and songwriters who owned the rights to the music that was being streamed,” said Damian Williams, the U.S. attorney for the Southern District of New York.
His scheme involved creating AI music channels, producing AI-generated music content, and using thousands of fake accounts to stream the music automatically on repeat through a program. The prosecution estimated that he could earn $3,307 in royalties every day, or about $1.2 million a year.
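The prosecution’s yearly figure follows directly from the daily estimate; a quick sanity check of the arithmetic, using only the numbers cited in the article:

```python
# Royalty figures cited by prosecutors (from the article)
daily_royalties = 3307            # dollars per day
yearly_royalties = daily_royalties * 365

print(yearly_royalties)           # 1207055, i.e. roughly $1.2 million a year
```

Rounded to two significant figures, the annualized total matches the prosecution’s $1.2 million estimate.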
