The problematic-content filter backfired, increasing the distribution of ‘misinformation’
Social media giant Facebook’s content filters, which are meant to downrank harmful content and ‘misinformation’, have been malfunctioning for six months, according to an internal report, raising the familiar question of who will watch the watchmen.
As many as half of all News Feed views were exposed to potential “integrity risks” over the past six months, according to an internal report circulated last month and seen by The Verge on Friday. The report says engineers first noticed the problem in October but could not find its cause, observing regular flare-ups lasting weeks at a time until, by the company’s account, they got the issue under control on March 11.
Those “integrity risks” weren’t just flare-ups of “fake news,” though that was the symptom that first tipped off the company’s engineers. Rather than suppressing posts from repeat offenders previously red-flagged by Facebook’s fact-checkers for spreading ‘misinformation’, the News Feed was boosting their distribution, driving up views on individual posts by as much as 30%.
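The report does not describe the underlying defect, but behavior like this is consistent with a ranking penalty being applied in the wrong direction. The Python sketch below is purely illustrative, not Meta’s code: the function names, parameters, and the 0.77 penalty factor are all hypothetical, with the factor chosen only so that the inverted math reproduces roughly the 30% boost the report describes.

```python
# Purely illustrative sketch -- not Meta's actual ranking code.
# All names and the penalty factor below are hypothetical.

PENALTY = 0.77  # hypothetical downranking multiplier for flagged accounts


def rank_score(base_score: float, repeat_offender: bool) -> float:
    """Return the final feed score for a post, as intended."""
    if repeat_offender:
        # Intended behavior: shrink the score of posts from accounts
        # previously red-flagged by fact-checkers.
        return base_score * PENALTY
    return base_score


def buggy_rank_score(base_score: float, repeat_offender: bool) -> float:
    """The same scoring with the penalty inverted, as a bug might do."""
    if repeat_offender:
        # Dividing instead of multiplying turns a ~23% suppression into
        # a ~30% boost (1 / 0.77 ~= 1.30) for exactly the posts that
        # were supposed to be buried.
        return base_score / PENALTY
    return base_score


if __name__ == "__main__":
    print(rank_score(100.0, True))        # 77.0   -- downranked as intended
    print(buggy_rank_score(100.0, True))  # ~129.9 -- boosted instead
```

A single inverted operation like this would also explain why the symptom was intermittent and hard to trace: every other post flows through the same code path unchanged.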
Facebook’s filters were also failing to downrank nudity, violence, and Russian state media, the last of which the platform ranks as equally offensive as the first two. Following Meta’s announcement that it would no longer restrict calls for violence against Russians in the context of the Ukrainian conflict, Moscow’s media regulator Roskomnadzor blocked access to Facebook and Instagram in Russia, more than a week before Meta engineers were allegedly able to figure out why the platforms were bolstering harmful content.
The internal report indicated the filtering problem actually dated back to 2019. Meta spokesman Joe Osborne told The Verge the company “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics,” but that the issue didn’t have a “noticeable impact” until October. Contrary to the report, Osborne insisted the bug “has not had any meaningful, long-term impact on our metrics” and didn’t apply to “content that met its system’s threshold for deletion.”
The confusion over the longstanding presence of the ‘bug’ shines a light on the growing body of content Facebook subjects to downranking. No longer suppressing just rule-breaking content, the platform’s algorithms also target “borderline” content that supposedly comes close to breaking its rules, as well as content its AI flags as potentially violating but that still requires human review to confirm.
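As a rough sketch of that tiered treatment, the Python below routes a post by a classifier’s confidence score; the thresholds, tier names, and routing are assumptions for illustration, not Meta’s published values.

```python
# Illustrative triage of a post into the tiers described above.
# Thresholds and tier names are assumptions, not Meta's values.

def triage(violation_probability: float) -> str:
    """Route a post based on a classifier's estimated violation probability."""
    DELETE_THRESHOLD = 0.95      # hypothetical: near-certain violations are removed
    BORDERLINE_THRESHOLD = 0.70  # hypothetical: "borderline" content is downranked
    REVIEW_THRESHOLD = 0.50      # hypothetical: uncertain flags go to human review

    if violation_probability >= DELETE_THRESHOLD:
        return "delete"          # rule-breaking content, removed outright
    if violation_probability >= BORDERLINE_THRESHOLD:
        return "downrank"        # close to the line, quietly suppressed
    if violation_probability >= REVIEW_THRESHOLD:
        return "human_review"    # AI suspects a violation but can't be sure
    return "rank_normally"
```

The point of the sketch is that only the top tier produces a visible removal; the lower tiers silently reshape distribution, which is why a bug there can run for months before anyone outside the company notices.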
Even CEO Mark Zuckerberg acknowledged that users are driven to engage with “more sensationalist and provocative” content, and the company bragged last year that all political content would be downranked in News Feed.
Facebook, now known as Meta, has recently come under the microscope over the $400 million its founder Mark Zuckerberg poured into the 2020 election. Those funds were routed almost exclusively to districts won by then-candidate Joe Biden. While the recipients were nonprofit organizations, which the IRS forbids from supporting a specific political candidate or party, the groups were staffed by Democratic operatives, including former strategists for Hillary Clinton and Barack Obama.
Congressional Republicans have also demanded documents and communications regarding Meta’s efforts to suppress the Hunter Biden laptop story, which could have served as an incriminating “October surprise” against then-candidate Joe Biden had Facebook and Twitter not unilaterally buried it.