“AI can make up facts,” a ChatGPT developer warns [횡설수설/Seo Jung-bo]

I asked ChatGPT, the interactive artificial intelligence (AI) service, to write an article analyzing last year’s Korean presidential election. In less than 10 seconds it produced a full A4 page, but the conclusion was wrong: it said President Moon Jae-in had succeeded in winning re-election, comfortably beating candidate Hong Joon-pyo. I asked the same question again. This time it replied that President Moon had won 51.04% of the vote, defeating candidate Ahn Cheol-soo, who received only 48.96%. When I asked whether Yoon Suk-yeol had won, it answered that Yoon Suk-yeol had not been a candidate at the time.

▷ChatGPT, launched last December by the US startup OpenAI, is the overwhelming topic of conversation these days. Stories abound of it passing professional exams such as the US medical licensing exam, the bar exam, and MBA exams, and of it writing master’s-level theses and congressional speeches. When a user asks a question based on incorrect facts, it points out the error, and it freely produces creative texts such as song lyrics and poetry. Google, which saw its spotlight stolen by ChatGPT, decided to launch ‘Bard’, a similar service, and Naver also plans to launch ‘SearchGPT’ in the first half of the year.

▷ChatGPT made a spectacular debut, but there are more than a few shadows behind it. The current version was trained only on data up to 2021, so it does not know the latest developments. Moreover, because it tends to answer even about facts it has not learned rather than admit it does not know, there is a high risk of false answers, such as its analysis of the 2022 Korean presidential election. In particular, in fields such as medicine and education, blindly trusting the AI can lead to serious consequences. Recently, an American doctor entered a patient’s age, gender, and symptoms into ChatGPT; it produced a specific disease name, but the diagnosis was wrong, and the research paper it cited as evidence also turned out to be false.

▷It is in the same vein that Mira Murati, chief technology officer (CTO) of OpenAI, the developer of ChatGPT, said in a recent interview with Time magazine that “AI can make up facts.” This remains a limitation of language-model-based AI. Murati added that regulation to block the room for abuse by bad actors should begin now. It is a warning that AI ethics and regulations keeping pace with rapid technological development are essential if AI is to be managed in line with human values.

▷AlphaGo, which appeared like a comet in 2016, shook the Go world by defeating Lee Se-dol, a 9-dan professional. Since then, many AI Go programs have appeared, overwhelming humans with their skill. Yet human play that has absorbed AI techniques has also advanced. AI services like ChatGPT likewise have many problems, but if these are solved one by one through technological progress, they could fundamentally change human life. The key is what safeguards are put in place and how the technology is used.

Seo Jung-bo, Editorial Writer, suhchoi@donga.com
