Facebook: Fighting hate speech and misinformation with automated AI – is that enough?
New York, August 14, 2020
The increase in hate posts that Facebook registered between the first and second quarters of 2020 is frightening on its own: 134%. The figure calls for an explanation, and Facebook offered one in a statement: improved AI detection systems and better automation technology for posts in Spanish, Arabic and Indonesian account for the jump.
Guy Rosen, Facebook’s vice president of integrity, told CBS reporters over the phone that the company will move “to process more content first through our automated systems” but will continue to rely on humans to review posts and train artificial intelligence.
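The workflow Rosen describes, in which automated systems take the first pass and humans handle the rest, can be pictured as a simple confidence-based triage. The following is a minimal sketch, assuming a hypothetical classifier that outputs a probability that a post violates policy; the function name and thresholds are illustrative, not Facebook's actual values.

```python
def triage(score, remove_threshold=0.9, allow_threshold=0.3):
    """Route a post by the classifier's confidence that it violates policy.

    score: probability (0.0-1.0) that the post violates policy,
    as returned by a hypothetical automated detection model.
    Thresholds are illustrative assumptions.
    """
    if score >= remove_threshold:
        return "auto_remove"    # high confidence: act automatically
    if score <= allow_threshold:
        return "allow"          # low confidence of violation: leave up
    return "human_review"       # uncertain: queue for human moderators


# Example: three posts with different classifier scores
posts = {"post_a": 0.95, "post_b": 0.10, "post_c": 0.55}
decisions = {pid: triage(s) for pid, s in posts.items()}
```

Only the middle band of uncertain scores reaches human moderators, which is how such systems scale; it is also why, as Llanso notes below, errors in the automated tools propagate across the whole platform.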
According to its Community Standards Enforcement Report, Facebook took action against 22.2 million posts containing hate speech between April and June. The number of terrorism related posts that Facebook has taken action against rose from 6.3 million in the first three months of the year to 8.7 million in the second quarter.
Relying on automated systems is a major risk, according to Emma Llanso, director of the Free Expression Project at the Center for Democracy and Technology. Her explanation: “Any kind of content moderation carries a risk of error, but when we talk about automated systems, this risk is multiplied enormously because automated recognition tools are applied to every piece of content on a service,” says Llanso.
There is a danger, says Llanso, that such a system sweeps too broadly. Since AI-led content moderation “essentially filters everyone’s content” and “subjects everyone’s posts to some kind of judgement”, Llanso says, “this approach poses many risks to freedom of expression and privacy”. It also means that if there are errors in the automated tools, “the impact of those errors is felt across the platform,” she added.
Artificial intelligence has further limitations: it is less effective at detecting content involving self-harm. The number of actions taken against posts containing suicide and self-injury content, as well as child nudity and sexual exploitation, fell from 1.7 million in the first quarter to 911,000 in the second, even though Facebook says it removed or flagged the most harmful content, such as live videos showing self-injury.
Llanso said that “even in a system like Facebook’s where a lot of automation is used, human moderators are an absolutely essential component”.
In recent months, Facebook has also been fighting misinformation about the virus. From April to June, the company removed over 7 million pieces of COVID-19-related content it considered harmful on Facebook and Instagram. Rosen described these posts as those that “propagate fake preventative measures or exaggerated remedies that the CDC and other health experts tell us are dangerous.”
On Wednesday, technology and social media companies, including Facebook, Twitter, Google and Microsoft, issued a joint statement saying they had met with government partners to keep them informed about their efforts to combat misinformation on their platforms.
“We discussed preparations for the upcoming conventions and scenario planning in connection with the election results,” the joint statement said.