Written by New York Times reporters Sheera Frenkel and Cecilia Kang, this book delves into the collision between Silicon Valley's big tech companies and traditional media through a service called Facebook, focusing especially on the 2016 US presidential election. Is what the algorithm shows us fair? And if it is not, who should oversee it?
We expect the media to report fairly the truth the public needs to know. Media companies are still companies, so they must make a profit, and to make a profit they sell advertisements. But outlets that exaggerate or downplay the content of their articles to catch advertisers' attention draw criticism, and if an incorrect article is published, the outlet bears ethical responsibility.
What if a newspaper were filled with blog posts that anyone could write and whose authenticity had never been verified, and what if its pages were laid out according to each reader's habits, with no distinction between advertising and editorial content? It would use an artificial intelligence algorithm to profile each reader's tastes and place the posts or advertisements that reader is most likely to enjoy in the most visible spots. There is no guarantee such a paper would report fairly, but wouldn't people enjoy it more and read it longer? Today's Facebook news feed is exactly that.
The algorithm collects every signal it can: which posts you clicked 'Like' or 'Share' on, which posts you commented on, and which posts you lingered over, and it shows you first the content you are most likely to enjoy. Whether the content is true, or what ripple effects it may have, is not considered. As a result, some users create fake news while worrying only about how to get more clicks on their posts and their ads.
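The engagement-driven ranking described above can be sketched in a few lines. This is only an illustrative model, not Facebook's actual system; the signal names and weights are assumptions made for the example. The key point is visible in the code itself: truthfulness never appears in the scoring formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int            # how often the reader liked similar content
    shares: int
    comments: int
    dwell_seconds: float  # how long the reader viewed similar posts
    # Note: no field here measures whether the content is true.

def engagement_score(p: Post) -> float:
    """Score purely by predicted engagement (illustrative weights)."""
    return 1.0 * p.likes + 2.0 * p.shares + 1.5 * p.comments + 0.1 * p.dwell_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    """Most engaging content first; accuracy never enters the ranking."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("sober-news", likes=3, shares=1, comments=2, dwell_seconds=30),
    Post("outrage-bait", likes=10, shares=8, comments=12, dwell_seconds=120),
])
print([p.post_id for p in feed])  # the sensational post ranks first
```

Under this kind of objective, a fabricated but outrage-inducing post beats a sober, accurate one every time, which is exactly the incentive the fake-news creators in the text are exploiting.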
Ahead of the 2016 US presidential election, Russian hackers who understood this system well pushed posts and advertisements favorable to a particular candidate to a vast number of voters. Reaching about 126 million Americans cost only about 100 million won, making the operation extremely cost-effective. These promotional posts mixed the fake with the real, and because they contained information obtained through illegal hacking, it was difficult to distinguish one from the other.
All along, Facebook and other big tech companies have held that, because the algorithm applies equally to every post, the platform is merely a conduit of communication and bears no responsibility for the content. This position fit American sentiment well, in that it defends freedom of expression and opposes censorship. After the attack on the US Capitol in January 2021, however, a consensus formed that illegal content uploaded to social networking services (SNS) must be policed.
The difficult part starts here. How should the algorithm be modified? Artificial intelligence algorithms are not perfect, so who will take responsibility for the illegal content they miss? The innovation brought by deep learning and artificial intelligence, and the Internet's power to connect people across time and space, are undeniably attractive. But to enjoy the sweet fruit safely, oversight and accountability are required, just as we regulate the use of pesticides.