How & Why Facebook Is Penalizing Users Who Share Misinformation

Facebook has just announced three new tools to combat misinformation on the social media app. They’re designed to better inform people about misleading content, prevent individuals from spreading it, and offer more context about why certain posts are flagged.

Misinformation has become a rampant issue for just about every social platform. Among all of them, however, Facebook has struggled the most. Multiple studies have reiterated this point, including one from October 2020, which revealed that misinformation on Facebook had gotten substantially worse compared to its presence during the 2016 election. Facebook has made numerous attempts to stop the spread of this content, but no matter what the company does, misleading posts continue to flood it.

Despite the uphill battle, Facebook recently unveiled three new ways it's trying to clamp down even harder on misinformation.

First, when someone goes to like a page that has repeatedly shared content flagged by Facebook's fact-checkers, they'll now see a pop-up warning that the page is known for spreading false information. Users can still choose to follow or like the page anyway, but the hope is that the warning will keep these pages from gaining as much traction as they otherwise would.

In addition to larger pages, Facebook is also taking a stronger approach to personal accounts. The platform already reduces the reach of individual posts that have been fact-checked, but starting today it will "reduce the distribution of all posts in [the] News Feed from an individual's Facebook account if they repeatedly share content that has been rated by one of our fact-checking partners."

Last but not least, Facebook is improving the notifications it sends to users when a post they shared has been fact-checked. These notifications will now include the article used to debunk the post, a shortcut to share that article so others know the post was inaccurate, and a warning that repeated violations could result in the person's posts being shown less often in the News Feed.

Why These Changes Do (And Don’t) Matter

Some people will look at these changes as a valiant effort to stop misinformation, whereas others will likely see them as a bare-minimum effort from Facebook. At the end of the day, it's probably a mix of the two. There's no denying that Facebook has an increasingly dire misinformation problem, and if these changes help even a little in slowing its spread, that should be considered a win.

At the same time, however, these changes won't do anything for the Facebook users who are already deeply entrenched in the falsehoods they've been sharing and consuming for weeks, months, or years. A warning message here and there isn't going to suddenly stop them from posting content they believe is real, and if they get too fed up with Facebook's policies, they'll simply share those things somewhere else.

These three updates won’t make misinformation disappear from Facebook any time soon, and at this point, it’s hard to imagine if that’s even possible. It’s something that Facebook and other social platforms will have to keep fighting for years to come. In the meantime, small changes like this are still good to see. They may not be the final answer to the problem, but every little bit helps.