Facebook changes the way it flags fake news stories

Facebook has recently been making an effort to hold back the tide of fake news (which, as always, means news that is literally untrue, not news that is critical of Donald Trump), with the site announcing a feature last year that would notify users if a story’s veracity had been called into question by enough people. Like most things Facebook attempts, though, that system didn’t end up working particularly well. As explained in a piece for The Verge, it was slow, it ignored important context about how or why something was fake, and its language would sometimes “backfire” by making people think the system itself was biased.

Now, Facebook is replacing that system with a new one that surfaces related news stories that have been fact-checked. Basically, if you click on a news story on Facebook that says “Donald Trump is a pig monster,” a box of related stories from verified sources will automatically pop up, most likely indicating that you can definitely trust the accuracy of a story about Trump being a pig monster. The feature will also use “non-judgmental” language that respects people with “diverse perspectives,” meaning it’ll try not to make you feel silly for not trusting the fact-checked story about the guy being a pig monster.

Facebook said that in its studies, the new system didn’t necessarily reduce how often people clicked on a fake story, but it did lead to fake articles being shared fewer times. That’s basically an improvement, and projects like this have reportedly cut the traffic going to fake news sites significantly.
