An internal document has been released stating that the accuracy of the artificial intelligence (AI) Facebook introduced to manage harmful content is far lower than claimed. Facebook's crisis, triggered by a series of internal revelations, including the controversy over its subsidiary Instagram's harmful effects on youth, is growing more serious.
The Wall Street Journal (WSJ), an American daily, reported on the 17th (local time), citing internal documents it obtained, that the AI Facebook introduced in 2016 "cannot properly detect posts with harmful content such as hate speech or first-person shooting videos."
According to the documents, Facebook's AI did little to catch hate speech. In 2019, a senior Facebook engineer said, "AI removed only 2% of all hate speech that violated regulations." Another internal document from March of this year revealed that AI deleted only 3-5% of hate speech views. The detection rate for content violating Facebook's prohibition on incitement to violence was found to be just 0.6%.
The function to filter harmful images did not work properly either. In 2018, after videos of horrific traffic accidents and cockfights spread online, a Facebook engineer used AI to filter such videos. However, the AI misclassified cockfight videos as car crashes, and it could not tell the difference between a chicken badly injured in a fight and one with only minor injuries.
The AI even misclassified videos of shooting games or car washes as "first-person shooting" videos. In March 2019, a terrorist in Christchurch, New Zealand, shot and killed 51 people at two mosques and broadcast his crimes live on Facebook from a first-person view. Attempts to filter out the first-person shooter footage were virtually unsuccessful.
Facebook immediately countered the WSJ report. In a statement on its website, Facebook explained, "For content that is inappropriate but difficult to determine as a clear violation of regulations, we work to prevent its spread by, for example, reducing its frequency of exposure instead of deleting it."
Previously, Facebook invested 13 billion dollars (about 15.4 trillion won) to build the AI, starting in 2016, after users complained about its content management system. Last month, it said, "AI blocked 3 billion fake accounts in the first half of this year and removed 15 times more violating content than in 2017."
Facebook, the world's largest social media company, halted development of a children's version of Instagram last month after an internal document revealed that its subsidiary had confirmed the app's harmful effects on teenagers yet pushed ahead with the project. It was also embroiled in a morality controversy when whistleblowers accused it of "failing to take action despite being aware of the harm, neglecting harmful posts and fake news for the sake of astronomical profits."
Intern reporter Kim Ji-woo