Facebook apparently has little interest in identifying hate speech before it is published

July 12, 2020 · By Horst Buchwald

New York, July 12, 2020

As late as 2018, Mark Zuckerberg was still convinced that it would be easier to develop an AI "that can detect a nipple than to determine what constitutes linguistic hate speech." This hardly controversial position led Facebook to commission the development of just such an AI.

In an analysis released over the weekend, Zuckerberg's company concedes that it needs to improve its AI tools for identifying hate speech and other dangerous content.

The report highlights that Facebook continues to lag in its efforts to identify algorithmic bias. The company is not sufficiently prepared for cases in which its algorithms inadvertently promote "extreme and polarizing content," the report continues.

The authors, civil rights lawyers Laura Murphy and Megan Cacace, described the company's current approach to regulation as "too reactive and piecemeal." They also stress that Facebook "has a responsibility to ensure that its machine learning algorithms and models do not lead to 'unfair or detrimental consequences' or 'reinforce existing stereotypes or inequalities.'"

The report urged Facebook to improve its AI tools for rooting out harmful content more quickly, to address its predictive advertising systems and other AI issues, and to increase diversity within its AI teams.

In a blog post, COO Sheryl Sandberg responded by stating that Facebook would "not seek quick fixes." She claimed that Facebook is the first social media company to undergo such an audit.

Speaking on CNBC's Squawk Box, Taboola CEO Adam Singolda also criticized the company, arguing that Facebook should hire more human content reviewers rather than depend on AI. Last month, Facebook was forced to apologize after its AI mistakenly blocked an image of Aboriginal men in chains from the 1890s, flagging it as inappropriate nudity.