As generative artificial intelligence (AI) technology advances, abuse of the technology is also emerging: image-generating AI programs are being used to produce fake news, and copyrighted original content can be copied in an instant. In response, observers note that efforts at home and abroad are needed to ensure the reliability of AI, such as establishing ‘AI ethics’, even as countries draw their regulatory swords.
Copying a popular YouTube concept… copyright infringement controversy
According to industry sources on the 2nd, the risk of fake news is rising alongside the development of generative AI technology.
There are concerns that image-generating AI services will amplify the influence of fake news because they can create highly realistic images. Recently, a photo of Pope Francis walking around St. Peter’s Square in Vatican City in a long white padded jacket spread on social media and became a hot topic. According to foreign media such as CNBC in the United States, however, the picture turned out to have been created with ‘Midjourney’, an image-generating AI.
YouTube ‘copycat’ content made with generative AI is also becoming a problem.
Recently in Korea, a copycat channel that replicated a popular YouTube concept using AI programs became an issue. The channel used an AI program called ‘Noah AI’ to find videos with high view counts or strong audience appeal on YouTube. It then reportedly used Naver’s AI voice-recording service ‘Clovanote’ to extract transcripts from the videos, and wrote its scripts with ‘Luten’, a generative AI for writing.
Cho Won-hee, a representative attorney at the law firm Dlight, said, “The problem is not the AI technology that created the content or the AI program itself. But there is considerable similarity in the original images, videos, and concepts, so copyright infringement appears clear, and AI copyright issues will only become more complex in the future.”
■ Regulation alone cannot solve it … governments and businesses prepare countermeasures
Countries are deepening their deliberations over how generative AI should be used, and regulatory moves are gathering pace.
The European Union (EU) proposed the world’s first ‘AI Act’ in 2021. The bill focuses on high-risk AI, laying out regulations such as supplier obligations, conformity assessment, and certification. The UK government also recently published an AI white paper urging regulators to develop a context-based approach tailored to how AI is actually used.
However, there is also a skeptical view that regulation and legislation alone cannot solve the problem: technology is developing so fast that the law cannot cover every case, and heavy-handed rules could leave domestic companies lagging in the contest for AI technological hegemony.
Accordingly, moves to establish ‘AI ethics’, the ethics needed for the development and use of AI, are accelerating, and efforts to ensure the reliability of AI are under way in Korea as well. The Ministry of Science and ICT launched the AI Ethics Policy Forum last year, a venue where experts from fields such as AI, ethics, education, and law, together with the public, discuss the ethical development and use of AI.
There are also efforts to establish AI ethics at the corporate level. Representatively, Naver, which is developing the large-scale AI ‘Hyperclova’, announced its AI code of ethics in 2021. The code sets out five principles that all Naver members must follow when developing and using AI: △developing AI for people △respect for diversity △harmony between reasonable explanation and convenience △service design with safety in mind △privacy protection and information security.
Reporters Lim Soo-bin and Kim Mi-hee