Instagram has begun using image recognition and other tools to identify posts and stories that may contain misinformation and send them to Facebook’s fleet of fact-checking partners for review. If they’re determined to be false, Instagram will not recommend the posts to new users in the Explore tab or hashtag pages, as first reported by Poynter.
Paris Martineau covers platforms, online influence, and social media manipulation for WIRED.
But the Facebook-owned image-sharing network won’t remove the misleading posts; nor will it demote them in users’ main feeds, leaving millions of people vulnerable to misinformation.
Since 2016, Facebook has referred questionable posts to a team of over 50 news organizations from around the globe. Any item determined to be false is labeled and demoted in News Feed; anyone who tries to share a false post is warned against doing so. The same fact-checkers will review Instagram posts, but those found to be false will not be labeled as such, nor will they be demoted in users’ feeds or in the Instagram Stories carousel.
An Instagram spokesperson said the company is focused on making it harder for new users to be algorithmically exposed to misinformation, rather than on curbing its overall reach. The spokesperson said Facebook’s and Instagram’s approaches to misleading posts differ in part because Instagram lacks a reshare button, and because the content users see in their Instagram feeds comes solely from accounts they have chosen to follow, unlike on Facebook.
Instagram is a hotbed for disinformation and inflammatory content designed to exacerbate tensions among different demographic groups. In 2017, the platform was the go-to social network for the Russian propaganda arm known as the Internet Research Agency, which used more than 130 fake Instagram accounts to spread disinformation and deepen cultural divisions among Americans. The top 10 IRA accounts collectively garnered more than 120 million engagements on Instagram and had hundreds of thousands of followers, according to a report commissioned by the Senate Intelligence Committee.
Instagram has also proved to be an effective and popular tool among extremists and conspiracy theorists. After Facebook, YouTube, and others banned conspiracy theorist Alex Jones and his media organization Infowars last August, Jones and Infowars took to Instagram and used the site to cultivate a massive audience until they were finally banned from the platform—along with several other extremists—last week.
Extremist accounts and other fake-news mongers certainly owe some of their popularity to the algorithms of Instagram’s recommendation pages like the Explore tab or hashtags. But their massive followings also come from word of mouth and channels like YouTube, Twitter, and Facebook. Crimping their ability to easily reach new audiences doesn’t diminish their influence on those already trapped in these misinformation-ridden ecosystems.
Instagram uses image-recognition technology to flag posts that could contain misinformation. A spokesperson says the company looks for signals such as heavy use of text overlay on images or certain words in the comments to help discern what may warrant a fact-check. Additionally, when fact-checkers identify a particular post or image on Facebook as misinformation, Instagram uses the same technology to find similar posts on its own platform and reduce their reach through the Explore feed and other recommendation areas, the spokesperson says.
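Instagram hasn’t disclosed how its similarity matching works. One common technique for finding near-duplicate images, which this kind of system could plausibly use, is perceptual "difference hashing" (dHash); a minimal sketch, using tiny grayscale arrays in place of real images:

```python
# Hypothetical sketch only: Instagram has not published its matching method.
# dHash compares adjacent pixels of a shrunken image; near-duplicate images
# (recropped, re-brightened, lightly filtered) produce hashes that differ
# in only a few bits, measured by Hamming distance.

def dhash(pixels):
    """pixels: 2D list of grayscale values. Emits one bit per pair of
    horizontally adjacent pixels: 1 if brightness increases left-to-right."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two 4x5 "images": the second is a uniformly brightened copy, so the
# left-to-right gradients (and therefore the hash) are unchanged.
original = [[10, 20, 30, 40, 50],
            [50, 40, 30, 20, 10],
            [10, 20, 30, 40, 50],
            [50, 40, 30, 20, 10]]
brighter = [[v + 5 for v in row] for row in original]

assert hamming(dhash(original), dhash(brighter)) == 0
```

A production system would first downscale and grayscale each image (e.g. to 9x8 pixels) before hashing, then treat any pair within a small Hamming-distance threshold as the same underlying image.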
For now, Instagram says the company’s fact-checking partners are receiving only a limited number of posts to review, though their decisions affect the type of content surfaced to all Instagram users. The total will remain limited in coming weeks as the company monitors the effectiveness of the fact-checking program.
Accounts whose posts are repeatedly flagged as misinformation won’t be penalized beyond the reduced reach through Explore and other recommendation surfaces. The spokesperson says Instagram is “looking at” creating a system akin to Facebook’s, in which posts debunked by fact-checkers would carry a label noting as much, but offered no timeline for a decision on such a move.
When asked why it took so long for the company to develop any program dedicated to slowing the spread of misinformation, the spokesperson pointed to the peculiarities of Instagram as a platform, including the fact that most images posted there are cropped, filtered, or overlaid with text, which they said makes it harder to detect certain types of problematic content using automation and image recognition.
Instagram is working on a new tool to counter vaccine-related misinformation, the spokesperson says. If a user clicks on an anti-vax hashtag or content related to vaccine misinformation, Instagram will display a pop-up message showing educational materials about vaccination. Exact details on the design of the pop-up and when it will roll out are still in the works, the spokesperson says, but it appears to be modeled on similar pop-ups that appear for user searches related to opioid use and self-harm, which Instagram says have been successful.
Instagram also plans to change its policies regarding account removal, as first reported by Engadget. Currently, users are banned for violating Instagram’s guidelines only if "a certain percentage" of the posts they have made within an undisclosed window of time are found to go against the company’s policies. Instagram will soon switch to a more straightforward strikes-based system, with the number of violations required for removal staying consistent among all users, the spokesperson confirmed.
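The difference between the two removal policies can be made concrete. The actual thresholds (the "certain percentage" and the strike count) are undisclosed, so the numbers below are illustrative assumptions only:

```python
# Hypothetical sketch of the policy change described above; the real
# thresholds are not public, so 30% and 3 strikes are invented values.

def should_remove_percentage(violations, total_posts, threshold=0.3):
    """Current-style rule: remove only if violating posts make up a
    certain share of recent posts, so prolific accounts can absorb
    many violations without crossing the bar."""
    return total_posts > 0 and violations / total_posts >= threshold

def should_remove_strikes(violations, max_strikes=3):
    """Strikes-based rule: the same fixed violation count applies to
    every account, regardless of how much else it posts."""
    return violations >= max_strikes

# An account with 4 violations among 100 posts survives the percentage
# rule (4% < 30%) but not the strikes rule (4 >= 3).
assert not should_remove_percentage(4, 100)
assert should_remove_strikes(4)
```

This illustrates why a strikes system is "more straightforward": the removal bar no longer scales with an account's posting volume.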