How Facebook’s AI is struggling to uncover misinformation
New York, 24.6.2020
Automated moderation is a blunt instrument. That became clear when Facebook users tried to post a historical photograph of Aboriginal men in chains.
Asked how Facebook reacts to content that is considered “inappropriate”, Mark Zuckerberg explained: “It is much easier to set up an AI system that can detect a nipple than to determine what linguistic hate speech is”. That was in 2018, and the AI Facebook deployed to detect nudity did make the right call in most cases. Between January and March of this year, Facebook removed 39.5 million items of adult content for nudity or sexual activity, 99.2% of them automatically. But there were also 2.5 million appeals against those removals, and 613,000 content items were restored.
Then it emerged that Facebook’s AI has trouble with historical photos and paintings. A concrete example: Facebook suspended a user last week for posting a picture from the 1890s of Aboriginal men in chains. The post was a response to the Australian Prime Minister’s assertion that there had been no slavery in Australia.
Facebook acknowledged that blocking the post was a mistake and restored it.
The picture itself was whitelisted, but Facebook’s systems did not apply the same whitelist to shares of articles containing the picture, resulting in a seemingly endless cycle of penalties.
It was an AI mistake, but to people who tried to share the image or the story, it looked as if Facebook had taken a wrong-headed, hardline position on a particular issue while other posts, including inflammatory posts by U.S. President Donald Trump, remained untouched.
There is no doubt that Facebook has a moderation problem. Given the terrible stories of third-party moderators suffering from post-traumatic stress disorder because they have to check content on Facebook all day long – for which tens of thousands are now demanding compensation – it is no surprise that Facebook is trying to automate everything.
It will be interesting to see what mistakes Facebook makes as it uses AI-driven image scanning to limit the misinformation spreading on its platform about the Covid-19 pandemic.
In April, the company applied misinformation warning labels to 50 million Covid-19-related posts, based on fact-checks of 7,500 items. Since March 1 it has also removed 2.5 million marketplace listings for products such as face masks and hand sanitizer.
Once a claim is fact-checked and classified as misinformation, the image used in the original post is scanned so that it can be recognized in the future when people try to share it again. Identical images are then tagged with a fact-checking label, whether or not they are attached to articles.
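The general idea behind this kind of image matching can be sketched with a simple perceptual “difference hash”. This is only an illustration of the technique, not Facebook’s actual system (which relies on learned image embeddings); the function names and the threshold are hypothetical. Each image is reduced to a compact bit fingerprint, and a new upload is flagged if its fingerprint is within a small Hamming distance of one already labelled as misinformation.

```python
# Hypothetical sketch of near-duplicate image matching via a difference
# hash (dHash). An image is assumed to be pre-scaled to a small grayscale
# grid, given here as a row-major list of lists of 0-255 values.

def dhash(pixels):
    """Build a bit fingerprint: each bit records whether a pixel is
    brighter than its right-hand neighbour in the same row."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of bits in which two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_flagged(candidate_hash, flagged_hashes, threshold=4):
    """Flag a candidate image if its fingerprint is close to that of
    any image already fact-checked as misinformation."""
    return any(hamming(candidate_hash, h) <= threshold for h in flagged_hashes)
```

Because the hash depends only on the brightness ordering of neighbouring pixels, small edits such as recompression or mild brightness changes flip few or no bits, so the thresholded Hamming comparison still matches near-duplicates even when the files are not byte-identical.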
But a cursory glance at one of the groups still promoting Covid-related conspiracy theories shows that plenty of misinformation slips through the cracks. There is, however, no evidence yet that Facebook is “over-censoring” misinformation, despite what some in these groups may claim.
A platform so large that it struggles to determine what content is and isn’t appropriate across different cultures might be better off operating at a smaller scale, where standards can fit the communities it serves.