Vaccines don’t kill, but insisting otherwise can. Facebook, Google and Twitter know that — which is why, as measles outbreaks send children to intensive-care units across the country, they have all decided to do something about it.
“Do something!” is exactly what people around the world have been saying to social media sites that, until recently, refused to accept responsibility for what happened on their platforms. That attitude is changing, but what “something” means is still up in the air. Do what, exactly, to whom? And will it help?
The most prominent platforms already ban hate speech and incitements to violence, at least in theory. They remain more reluctant, however, to remove or limit falsehoods. They’re not the arbiters of truth, they say, and they have always looked to free speech as a lodestar. Policies differ from platform to platform. But firms should take aggressive action when there’s a high likelihood of real-world harm.
It’s not a perfect metric. Neither is anything else. Even amid the messiness, though, agitating against life-saving inoculations falls cleanly on the wrong side of the line. Health officials studying the resurgence of a disease that was supposed to have been eliminated in this country almost two decades ago have made it clear: The outbreak of misinformation online is facilitating literal outbreaks of disease.
Companies apparently agree — to a point. Platforms could remove all anti-vax material, but so far they won’t, citing either squeamishness about policing belief or a desire not to stop parents from having conversations about so personal a decision. They could remove people, pages or groups that systematically promote anti-vax material, but they won’t do that, either. That leaves seeking to limit the reach of false messages on their platforms without banning them altogether.
Facebook has announced it is down-ranking anti-vaxxer groups and pages in users’ news feeds and in searches, as well as cutting them out entirely from recommendations and predictions and getting rid of their advertisements. Instagram, which Facebook owns, has blocked hashtags such as #vaccinescauseautism and #vaccineskill.
YouTube, which is owned by Google, has stopped anti-vaccination channels from running ads, and says hoaxes will appear less often in its “up next” module. When viewers do watch those videos, they’ll also see “information panels” with corrective context. Twitter has created a tool that pulls up a handy link to a government website offering facts about vaccination for anyone who searches for the subject, and it won’t auto-suggest terms that tend to lure people toward the inaccurate.
The remedies that focus on searches seek to fill what’s known as a “data void” — a sort of digital black hole that sucks curious consumers into the realm of the factless. If you search “did the Holocaust happen,” for example, you may be more likely to find sources that say it did not, because they’re the ones bothering to weigh in on what everyone else feels is settled fact. Vaccines, similarly, are settled science.
But these approaches can fall short. Search “#vaccines” on Instagram, leaving the whole “cause autism” or “kill” thing out, and the first accounts to show up are conspiratorial, with names such as “vaccines_uncovered” and “vaccines_revealed.” Believe it or not, they aren’t dedicated to touting the benefits of polio shots. Results on Facebook land users in similarly treacherous territory: “The Truth About Vaccines Docu-Series,” for example, is followed closely by “Tongue Ties, Autism, MTHFR, Vaccine, Leaky Gut — What’s the connection?” None, really, but these pages will tell you otherwise.
You can’t fill a data void with more emptiness, so approaches that don’t also surface enough authoritative sources to replace the junk have a fatal flaw. Twitter’s pop-ups help solve that half of the problem by linking to a government site, but the platform leaves alone the anti-vax content that appears right below. YouTube’s model, which seems to prioritize mostly verified videos from channels such as the Mayo Clinic and the Centers for Disease Control and Prevention, does a better job.
Even if platforms try to push down untrustworthy sources and prop up reliable ones, algorithms miss things. They’re even more likely to miss when their targets dodge. The anti-vaccine community, which prefers the term “vaccine hesitant,” is no stranger to language games. Shifting rhetoric to talk about “doubts” or the need for parents to “decide” for themselves can skirt automatic filters. Hoaxers can also evade policies about what counts as a lie, leaving the humans who set the rules flummoxed over where they should draw their lines.
Maybe some combination of strategies, over time, will spare some children the misery of measles or tetanus or whooping cough. Or maybe platforms will eventually have to supplement all that reach-limiting with some speech-limiting, too, at least for the most dangerous actors, many of whom prey on vulnerable communities. Maybe the answer is more fundamental, and sites will have to alter the incentives, from engagement algorithms to likes to follower counts, that reward extremism and sensationalism. Most likely, they will have to do all these things at once.
The Internet didn’t create vaccine denialism, just as it didn’t create the other maladies platforms are now being asked to moderate away. It did, however, help the hoax go viral. The Web was meant to empower everyone, and now those who oversee it are trying to — have to — take some of that power away. Doing something isn’t as easy as it sounds, but controlling this outbreak can at least offer lessons about how to handle the next one.