How Fake News Is Still Fooling Facebook’s Fact-Checking Systems

By Will Oremus

If the good news is that Facebook’s systems are having at least some effect, the bad news is that they’re far from foolproof. And unless Facebook continually improves them, propagandists will likely only get better at finding ways around them.

The Pelosi story offers one instructive example. It has been widely debunked, including by at least two of Facebook’s official fact-checking partners, PolitiFact and FactCheck.org. Yet, when you go to post the article link to Facebook, the platform offers no warning, no hint that it might be bogus. Likewise, when it appears in your News Feed, nothing indicates that it’s false.

Facebook couldn’t say definitively why one of the most viral political articles on its platform remained untouched by its fact-checking warnings months after it was published, and even for weeks after Avaaz’s study called attention to it. But there are at least two possible culprits.

First, it appears that PolitiFact’s fact check was applied in Facebook’s system to a different version of the false Pelosi claim, one that appeared in the form of a photo with overlaid text rather than an article link. Second, the article version may have skirted a fact check in part because the outlet that published it, PotatriotsUnite.com, identifies as a “satire” site, a label that experts say has become a popular fig leaf for misinformation merchants. That’s a label you’ll see only if you click the link in your News Feed and stop to pay attention to the site’s tagline, URL, or “About” page, rather than simply reacting to the headline and story itself, as so many people do. (There is also a watermark on the image accompanying the story that includes the word “satire,” though it’s so tiny as to be barely legible.)

Satire presents a quandary for Facebook’s fact-checkers: Slap earnest warning labels on every Onion story and suppress users’ ability to share it, and you essentially eradicate political humor from the platform, while insulting the intelligence of millions of Onion fans who are in on the joke. But what about self-described “satire” sites whose headlines seem calculated to mislead and inflame rather than amuse? Ideally, the fake news publishers who ran rampant on Facebook in 2016 would not be free to churn out the same divisive lies in 2020 simply by adding fine-print disclaimers. That said, it’s not clear how to objectively distinguish between legitimate satire and manipulative propaganda that’s posing as satire. And by the way, that isn’t a new problem: The site that published the infamous “Pope Endorses Donald Trump” story claimed to be satire, too.

Facebook’s compromise has been to offer fact-checkers the option to label a post as “satire,” which its systems treat differently from posts labeled misinformation. A post labeled satire appears in the News Feed with a list of related articles that includes a debunking, but it does not trigger warnings when shared, nor does the algorithm curtail its reach. But experts say, and Facebook acknowledges, that the company’s fact-checking partners have limited resources, and they generally prefer to spend them fighting serious falsehoods rather than harmless jokes.

Avaaz’s Quran said that has not escaped the notice of the people trying to spread lies. “If all you have to do is write somewhere that you’re satire and get away from the whole fact-checking process, then we’re in big trouble,” he said. Part of the problem is that the News Feed presents every story in the same format, with the headline and image visually emphasized while the name of the publisher appears in smaller type: a New York Times story looks basically the same as a story from PotatriotsUnite.com. That, of course, is part of what made Facebook such fertile ground for fake news and propaganda in the first place, and its fact-checking systems have done little to address that underlying issue.

The Pelosi story, the second-most-shared in Avaaz’s sample with nearly 1.4 million interactions, isn’t the only example of this loophole, nor even the only one from a publisher whose name is inexplicably potato-themed. The sixth-most-shared false story, headlined “Ocasio-Cortez Proposes Nationwide Motorcycle Ban,” links to a site called TatersGonnaTate.com, which belongs to the same network as PotatriotsUnite.com. The story was posted to a Facebook group called Southern War Cry, which has some 113,000 members, and the most-liked comments there give no indication that it was received as satire. One earnestly defends motorcycles’ role in society and questions Rep. Alexandria Ocasio-Cortez’s intelligence. Another commenter wonders what the congresswoman’s reasoning might be, then suggests sexually assaulting her.

That Facebook is increasingly pushing Groups as a supplement or even an alternative to the main News Feed could exacerbate the misinformation problem, as like-minded users egg each other on in private settings where there are no dissenters or skeptics motivated to correct the record. (Facebook says it takes action against Pages and Groups that “repeatedly share misinformation,” including making them less visible in notifications and search results. But it has more levers to deploy against Pages, which often rely on Facebook ads for monetization, than against Groups, which don’t.) By the same token, it suggests that political misinformation may flourish more among people who are already heavily partisan than among the sort of swing voters who decide U.S. elections. A November study of the impact of Russian interference on Twitter found that limited encounters with foreign propaganda did not measurably change people’s political attitudes, perhaps in part for that reason. That echoed the findings of a 2017 analysis of the impact of fake news on Facebook users.

And yet, if Facebook can’t get a handle on even clear-cut examples of misinformation, it’s hard to be optimistic about its efforts at News Feed reform. In addition to the satire loophole, some of the debunked posts in Avaaz’s study appear to have gone unflagged simply because the debunking wasn’t done by one of Facebook’s official fact-checking partners. For example, a story by the conservative Washington Free Beacon that accumulated more than half a million Facebook interactions alleged that Rep. Ilhan Omar was “holding secret fundraisers with Islamic groups tied to terror.” The story was rated false by the popular hoax-debunking site Snopes. But Snopes withdrew from Facebook’s fact-checking program in February, along with the Associated Press, amid reports that both were disillusioned by Facebook’s approach to the project. None of Facebook’s remaining partners appear to have fact-checked the Omar story, despite its inflammatory claim, which perpetuates the notion that U.S. Muslims are terrorist sympathizers.

As Nieman Lab’s Laura Hazard Owen pointed out in her sharp-eyed coverage of the study, Avaaz also included some examples of misinformation that were posted by politicians as status updates, as opposed to photos or article links. Those included two status updates by President Trump, one alleging that Democrats had written to Ukraine in 2018 urging its government to investigate him, and another claiming that Democrats don’t mind executing babies after birth. The latter was debunked by PolitiFact, a Facebook partner. But it didn’t matter, because Facebook’s policy explicitly exempts politicians’ own speech, including their ads and status updates, from fact-checking. Had Trump shared a debunked article link or image macro, Facebook’s systems would have flagged it; because the lies came in his own status updates, Facebook allows them.

The Avaaz study, like several prior analyses, found that the majority of fake news items in its sample (79%) were pro-Trump or anti-Democrat, but not all of them were. In fact, the single most viral fake news item in its report was a debunked story from December 2015 claiming that “Trump’s grandfather was a pimp and tax evader” and that his father was a member of the Ku Klux Klan. Facebook does warn you when you try to share the story, which seems like exactly the kind of step you’d want the company to take. Yet that didn’t keep the article from racking up more than 1.6 million interactions, according to Avaaz’s CrowdTangle data.

The most glaring shortcoming in Facebook’s systems might also be the one that’s hardest to fix. Even when everything goes right with its fact-checking partners, their human editorial resources pale in comparison to the scope of misinformation on the platform, and they can only vet a fraction of it. When the U.K. nonprofit and Facebook fact-checking partner Full Fact published an in-depth report on the company’s fact-checking efforts this year, improving the response time for fact checks was one of its top recommendations. But paying a professional journalist to fact-check a questionable story will always require far more time and resources than it takes to share it on Facebook, or for Facebook’s software to put it in millions of users’ feeds. In most cases, a story only rises to the top of fact-checkers’ priority list once it has already gone viral. And it continues going viral during the fact-checking process. By the time it’s marked as debunked on Facebook, its reach may have already peaked.

For that reason, Avaaz’s Quran recommends that Facebook alert every user who has seen a piece of misinformation once it’s debunked. But that risks reinforcing the false story for some, especially if they hadn’t paid much attention the first time.

The discouraging reality is that Facebook’s fact-checking efforts, however sincere, appear to be overmatched by the dynamics of its platform. To make the News Feed a less misleading information source would require far more than belated debunkings and warning labels. It would require altering the basic structure of a network designed to rapidly disseminate the posts that generate the greatest quantity of quick-twitch reactions. It would require differentiating between more and less reliable information sources — something Facebook has attempted in only the most halfhearted ways, and upon which Zuckerberg recently indicated he has little appetite to expand.

In some ways, Facebook is ahead of other major social platforms when it comes to fact-checking user-posted content on a large scale. It now has some 55 fact-checking partners working in 45 languages, and it continues to develop new tools to detect posts that might contain misinformation. Yet the progress the platform has made appears to be reaching its limits under a CEO who sees his platform as a bulwark of free speech more than of human rights, democracy, or truth. Last week, Facebook’s only Dutch fact-checking partner quit the program in protest of the company’s refusal to fact-check politicians.

In BuzzFeed’s 2016 analysis, the virality of the top fake news stories increased dramatically as the election neared. Avaaz’s 2019 report showed the same growth trend — even with the 2020 presidential election still a year away. The sobering takeaway is that even if misinformation appears to be less prevalent today than it was three years ago, it still has 11 more months to get much, much worse.