Exposed: The Dark Side of AI – Where Is the Deepfake Sexual Abuse Crisis Heading?
AI Ethics Letter: September 2024
1. The Growing Concern of Deepfake Sexual Exploitation
The recent reporting on deepfake sexual exploitation has sparked heated debate and raised concerns about the risks and consequences of this technology. Some argue that it is an unfortunate side effect of AI; others believe it was an entirely predictable outcome of how the technology has been developed. The issue has gained significant attention, with local governments and government agencies setting up channels for victims to report harm and have the content removed immediately.
Police and prosecutors are pursuing strict crackdowns, investigations, and punishments, while lawmakers are moving to establish a legal framework that strengthens the relevant penalty provisions. There are also calls to regulate platforms that can serve as distribution channels for sexually exploitative material, such as Telegram. Technical countermeasures are being considered as well, such as deepfake detection and inserting watermarks into AI-generated content.
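The watermarking idea mentioned above can be illustrated with a toy least-significant-bit (LSB) scheme: hide an identifying message in the lowest bit of each pixel byte, invisibly to the eye. This is only a minimal sketch of the concept. Production watermarks for AI-generated media (and provenance standards such as C2PA) are far more robust and designed to survive compression and editing; the function names below are illustrative, not from any real product.

```python
# Toy LSB watermark: embed a message in the least significant bit of each
# byte of a pixel buffer. Illustrative only; real AI-content watermarks
# use robust, tamper-resistant schemes rather than raw LSB embedding.

def embed_watermark(pixels: bytearray, message: bytes) -> bytearray:
    """Return a copy of `pixels` with `message` hidden in the LSBs."""
    # Flatten the message into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for message")
    marked = bytearray(pixels)
    for idx, bit in enumerate(bits):
        # Clear the lowest bit, then set it to the message bit.
        marked[idx] = (marked[idx] & 0xFE) | bit
    return marked


def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes hidden by embed_watermark."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)


carrier = bytearray(b"\x80" * 64)          # stand-in for 64 pixel bytes
marked = embed_watermark(carrier, b"AI")   # hide a 2-byte tag
print(extract_watermark(marked, 2))        # b'AI'
```

Because only the lowest bit of each byte changes, the visual difference is imperceptible; the trade-off is fragility, which is exactly why deployed watermarking schemes are far more sophisticated.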
However, it is essential to remember that this is a multi-layered, complex problem, and eradicating the related crimes will require sustained, multi-faceted effort. We must not forget that AI technology is developing in ways that make such abuses easier to commit and to spread.
2. The Correlation between AI Regulation Bills and AI Hype
A bill in California, USA, has sparked heated debate in the AI industry, with some arguing that it contains relatively strong regulations. The bill, known as the Frontier AI Model Act (SB1047), requires developers of AI models above a certain scale to build in an 'emergency stop' capability before model training and to pass safety tests before deployment.
While some argue that the bill is necessary to prevent the risks associated with AI, others believe it hinders innovation. The debate raises questions about how realistic the highlighted risks are and how such regulations should apply to general-purpose, multipurpose AI models.
The bill's content is clearly influenced by the strain of AI hype that emphasizes the dangers of superintelligence. It is worth noting the contradiction, however: some of the companies that most actively promoted that hype and publicly called for regulation are now opposing this bill.
3. The Wrong Answer: LA’s Experience with AI Digital Textbooks
The government's push for AI digital textbooks has met strong backlash, driven by concerns over their unproven effectiveness and over the appropriateness of collecting and using classroom data, including the personal information of students and teachers.
Los Angeles (LA), the second-largest public school district in the United States, ran into trouble after introducing a program similar to our country's AI digital textbooks. The startup that developed its AI learning platform, Ed, went bankrupt within months, and whatever benefits the platform offered were overshadowed by controversies over data leaks and inappropriate use of data.
The LA case shows that the issues raised by the public interest committee are not groundless fears but very real concerns. AI is often treated as a magic word that can solve problems on the front lines of education, but that ideal exists mainly in AI hype advertising.
4. The EU AI Law: A Step towards Regulation
The EU AI law has come into effect, and its provisions will be applied in stages over the next one to two years. Because the law covers all areas of AI, considerable confusion is expected over how to interpret its contents and what is needed to comply.
The EU is preparing the AI Pact, a voluntary agreement to prepare for the application of the AI law. The AI Pact focuses on building a network to share case studies and other practices so that AI development companies and other legal entities can prepare for the implementation of the system.
The AI Pact covers a wide range of topics, from management-level content such as the organization’s AI governance and AI literacy promotion strategies to practical content such as legal methods for securing AI learning data and human supervision mechanisms for AI systems.
The EU continues to grow the number of companies participating in the AI Pact, with Aleph Alpha, a leading German AI company, putting transparency and legal compliance at the forefront. This movement could serve as a reference for our country, which keeps postponing its own legislative discussions.
