On YouTube, the refrain used to go, everyone was six degrees of separation from Alex Jones. The video-sharing service banned the Infowars founder last summer, but conspiracy theories continue to thrive on the platform — and YouTube’s recommendation algorithm has historically pushed users toward them. A recent commitment to alter that feature, coupled with the company’s removal in late February of advertisements from anti-vaccination videos, suggests things might be changing.
YouTube has long been a haven for misinformation-mongers. The algorithm the site uses to direct viewers automatically from one video to another has privileged content that keeps people watching, and that has meant giving them increasingly extreme versions of stories the site thinks they like. A viewer can start with a video about the Green New Deal and be immediately directed to a video denying climate change exists and then to a video declaring the Earth is flat.
These online nudges can have offline consequences. The Pizzagate conspiracy theory led a man to bring a gun to a restaurant full of families. Those anti-vaccination videos make outbreaks of diseases like measles more likely. Young people are especially vulnerable to what appears on YouTube: Eighty-five percent of teenagers told the Pew Research Center they use the site, compared with 51 percent who said they use Facebook.
YouTube’s commitment to openness is valuable, but the platform’s power also comes with a responsibility to avoid real-world harm. YouTube already removes illegal content and content that violates its terms of service. The trouble is, when it comes to misinformation, harm can be hard to define. Many conspiracy theories are not obviously harmful until the moment someone gets hurt. That is more likely to happen when a fringe theory gains popularity and spirals into a mass obsession.
YouTube’s task, then, is to prevent that spiral. The platform is already elevating credible news sources in its search results and adding accurate information to sensitive queries. YouTube can also continue to adjust its recommendation algorithm to give lower priority to extreme content generally, as it says it has over the past few years. The company also said in January that it would take more aggressive action against misinformation it has reason to believe could be harmful by demoting specific videos flagged as false. YouTube already takes a similar approach to stripping ads from videos: The demonetization of anti-vaccination propaganda in late February wasn’t new policy, but it was responsive enforcement. YouTube’s decision to disable comments on almost all channels featuring minors in response to an epidemic of predatory replies is also encouraging.
Whether YouTube’s promises lead to further progress remains to be seen. Some investigations show viewers are still being directed to the same old lies, but at least one other review identifies a dramatic drop in recommendations of alt-right videos. The content moderation conversation so far has focused mostly on the content that companies decide has no place on their platforms. But how those companies treat the content that remains matters, too.