Haugen: Astronomical profits are Facebook’s priority

October 10, 2021 · By Horst Buchwald

San Francisco, 10/10/2021

An amendment to Section 230 could help hold Facebook accountable for its algorithms that promote content based on user interest, according to whistleblower Frances Haugen.

Facebook's AI systems prioritize content in news feeds based on how many users like, share, or comment on it. But machine learning algorithms of this kind are known to create a "powerful feedback loop." Haugen detailed how the algorithms show users content that is likely to make them angry, because anger generates more engagement than other emotions.
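The mechanism described above can be sketched in a few lines. This is a toy model only, assuming a simple weighted engagement score and an exposure boost for top-ranked posts; the function names, weights, and data shapes are invented for illustration and do not reflect Facebook's actual ranking system.

```python
# Toy sketch of engagement ranking and its feedback loop.
# All weights and field names are hypothetical, not Facebook's.

def engagement_score(post):
    """Score a post by raw interaction counts (likes, shares, comments)."""
    return post["likes"] + 2 * post["shares"] + 1.5 * post["comments"]

def rank_feed(posts):
    """Order the feed so the most-engaged-with posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

def simulate_feedback_loop(posts, rounds=3):
    """Each round, top-ranked posts gain extra engagement simply because
    they are shown to more users, which raises their rank further:
    the feedback loop Haugen describes."""
    for _ in range(rounds):
        ranked = rank_feed(posts)
        for boost, post in zip((10, 5, 2), ranked):
            post["likes"] += boost  # more exposure -> more engagement
    return rank_feed(posts)
```

Run on two posts with nearly equal initial engagement, the loop widens the gap every round: whatever ranks first attracts more interactions, which keeps it first.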

Haugen, a data expert, was recruited to Facebook's "Civic Integrity" team in 2019. She leaked tens of thousands of internal documents to the "Wall Street Journal" that formed the basis for the investigative series "Facebook Files."

She appeared on the program "60 Minutes," revealing her identity, and later testified before the U.S. Congress. One of her statements was that Facebook's "astronomical profits" always took precedence over the well-being of its users.

According to Haugen, Facebook’s management knew that if it changed its algorithms, fewer people would click on ads and profits would fall.

Haugen believes that Facebook should be held liable for its algorithmic decisions.

A bill introduced in the U.S. House of Representatives, the Protecting Americans from Dangerous Algorithms Act, addresses some of these concerns. However, amendments to Section 230 and privacy protections alone are not enough to solve the crisis, she stressed.

Other proposals include establishing a federal oversight agency to analyze Facebook’s internal software experiments and requiring the company to release privacy-compliant data about its website that outside researchers can analyze.

Haugen's comments echo a March article by MIT Technology Review's Karen Hao about Facebook's failed attempts to control misinformation using machine learning.

Hao describes how the company's growth-oriented focus led it to discard AI models that reduced user engagement and retain those that supported it, models which typically "foster controversy, misinformation, and extremism."

The article isn't about corruption, Hao says, but about good people "trapped in a rotten system, doing their best to enforce a status quo that can't be changed."