Facebook fights fake news articles



Facebook has recruited a team of third-party fact-checkers to flag fake news articles being shared on its network. The move is part of a growing effort to curb the spread of disinformation on the site in the wake of Russia's interference in the 2016 presidential election.


Fake news articles now receive a "disputed" label when they appear in the Facebook News Feed. But that is not the case when a user sends the same article to an individual or group through Messenger.


However, one Facebook user recently claims to have experienced the opposite. After sharing a Breitbart story with someone via Messenger, the person says they received a notification stating that the link they shared contained information disputed by PolitiFact, meaning they were sharing a fake news story.


The individual told Mashable that they had not published, or even attempted to share, the article to their News Feed. (The individual requested anonymity due to the polarizing nature of the story.)


Mashable reached out to Facebook for comment, and a company spokesperson told us the person had likely encountered a bug. If true, this would be a case where a bug exposed an inherent flaw in Facebook's fact-checking system.


If Facebook were truly committed to curbing fake news on its platform, it would address the problem across all areas of the site, not just the News Feed. According to the company, more than 1.3 billion people use Messenger every month, and we know at least some fake news articles have been shared on it.


One reason the company might not screen private Messenger chats for fake news is that it doesn't want to appear "creepy." Facebook drew criticism, for example, when it initially announced that it would use WhatsApp data to inform the company's expanded ad network (although it did not pull the information from private messages). Facebook has repeatedly insisted that it does not screen private conversations for advertising purposes.


For the most part, the private areas of Facebook's products, which include services like Messenger and WhatsApp, remain untouched. One exception: Facebook uses automated tools like PhotoDNA to scan for known child exploitation photos shared within Messenger. But no system is currently used to detect fake news within Messenger or WhatsApp.


Poynter, a Facebook fact-checking partner, recently explored how the closed nature of WhatsApp's network makes fighting fake news there especially difficult.


"WhatsApp is designed to keep people's information safe and private, so no one is able to access the contents of people's messages," and the policy communications WhatsApp leads Carl Waugh in an email to Poynter. "We realize that there is a challenge fake news, and we are thinking of ways we can continue to keep WhatsApp safe."


Of course, that could change in the future.


A Facebook spokesperson said the company is working on new and more effective ways to fight false news stories across all of its apps and services. Until then, it appears Facebook users will have to do their own fact-checking.
