The Washington Post: Democracy Dies in Darkness

Facebook and YouTube spent a year fighting covid misinformation. It’s still spreading.

The social media giants have struggled to find and take down anti-vaccine propaganda. But medical misinformation has thrived on their platforms for years.

U.S. Surgeon General Vivek Murthy talks to reporters during the daily news conference in the Brady Press Briefing Room at the White House on July 15, 2021, in Washington, D.C. Murthy announced the publication of a Surgeon General's advisory titled "Confronting Health Misinformation" and called on social media companies to do more to combat false information about the coronavirus vaccine and other health care topics. (Chip Somodevilla/Getty Images)

Facebook, YouTube and Twitter all banned harmful covid-related misinformation as the pandemic took hold throughout the world. But the false claims are still proliferating.

On YouTube, six of the 12 anti-vaccine activists identified by the Center for Countering Digital Hate as responsible for creating more than half the anti-vaccine content shared on social media still have easily searchable accounts that continue posting videos. On Facebook, researchers at the left-leaning advocacy group Avaaz ran an experiment in June to show how anti-vaccine material gets pushed to people. Two new accounts they set up were recommended 109 pages containing anti-vaccine information in just two days.

Vaccine rates in the United States have stalled and some cities are reinstituting mask recommendations as coronavirus cases rise again. Surgeon General Vivek H. Murthy issued a warning last week that vaccine misinformation spreading online was partly responsible for Americans refusing the vaccines, leading to avoidable deaths and illness. President Biden, too, has laid part of the blame on social media companies.


Researchers agree that social media is playing a role. At the heart of the problem are the companies’ content-recommendation algorithms, which are still generally designed to boost content that engages the most people, regardless of what it is — even conspiracy theories.

“For a long time the companies tolerated that because they were like, ‘Who cares if the Earth is flat, who cares if you believe in chemtrails?’ It seemed harmless,” said Hany Farid, a misinformation researcher and professor at the University of California at Berkeley, referring to a long-standing conspiracy theory about airplane condensation trails. “The problem with these conspiracy theories that maybe seemed goofy and harmless is they have led to a general mistrust of governments, institutions, scientists and media, and that has set the stage of what we are seeing now.”

On Thursday, Sen. Amy Klobuchar (D-Minn.) proposed a bill that would take away liability protections enjoyed by tech companies when it comes to health misinformation on their platforms. The bill is one in a long line of attempts to reform Section 230, the two-decade-old law that protects Internet companies from being sued for content posted on their websites. Even if it passes, though, it's unlikely to survive a First Amendment challenge, said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy.

The tech companies say they have made big inroads against misinformation about coronavirus vaccines on their platforms. Facebook says it has taken down more than 18 million pieces of covid misinformation since early last year. Twitter adds labels to tweets that might contain misleading information about coronavirus vaccines and has a three-strike system where it bans accounts that repeatedly post vaccine misinformation. Just this week, it temporarily suspended Rep. Marjorie Taylor Greene (R-Ga.) for violating its covid-19 misinformation policy.

“We’ll continue to take enforcement action on content that violates our covid-19 misleading information policy and improve and expand our efforts to elevate credible, reliable health information,” said Twitter spokesperson Trenton Kennedy.

YouTube has taken down more than 900,000 videos that broke its rules on covid misinformation since the start of the pandemic, said spokesperson Elena Hernandez. The channels of the anti-vaccine activists who are responsible for the majority of anti-vaccine content still available on YouTube have had videos taken down and received strikes that could lead to total bans, Hernandez said.

The coronavirus vaccine misinformation problem stems from the earliest days of social media, when the first cat videos and baby photos were posted. Mathematical equations known as algorithms guess a person’s interests by tracking what they watch and click on, then feed more similar content based on the results. The overarching goal is to capture people’s attention so that the platform can show them highly targeted and lucrative ads, so the algorithm rewards content that gets people to stay longer and interact more.

Facebook spokesperson Kevin McAlister said the company’s algorithms are also designed to remove and reduce misinformation on the site.

For the most part, as long as content wasn’t breaking rules against copyright infringement, nudity and bullying, it was allowed. But in recent years as social platforms have started to play a more central role in politics and society around the world, critics and regulators have pushed the companies to clamp down on more kinds of content, such as outright racist or sexist posts that have the potential to incite real-world violence.


The platforms also allowed a sea of conspiracy theories and vaccine-skeptical content to flourish for years on their sites. Anti-vaccine influencers were popular on social media long before the coronavirus, and the platforms largely turned a blind eye until 2019.

That year, a series of measles outbreaks and mounting public pressure prompted the companies to begin removing anti-vaccine content. A Senate panel issued a dire warning about the real-life public health danger of misinformation, and the companies began cracking down.

Even before the coronavirus pandemic, vaccines had become political, especially as false information tied the shots to autism. The Centers for Disease Control and Prevention confirms that vaccines do not cause autism.

Facebook, Twitter and YouTube have relied on the concept of “real-world harm” to inform their more restrictive policies. As public pressure about social media’s influence over real-life political and health events increased, the companies began training their computer and human moderators to catch some of the problematic information — and they faced a test like no other when the pandemic hit.

In March 2020, Google’s chief executive Sundar Pichai sent a note to employees rallying them to meet the occasion.


Millions of people were turning to the search giant with questions about how they could protect themselves from the virus, and Pichai said it was Google’s job to get them the most accurate information. Search results were topped with links to the CDC and the World Health Organization. Tens of thousands of ads attempting to take advantage of the panic were blocked, and Pichai promised that Google-owned YouTube would take down any videos with pseudoscience around the coronavirus.

Facebook and Twitter made similar commitments. For companies that generally resisted calls to ban content from their platforms, drawing a red line that coronavirus misinformation would not be tolerated was a rare move that experts and health officials applauded.

Amid action to stop the spread of health misinformation, Facebook and Twitter also faced intense criticism from President Donald Trump and his allies last year as the companies began labeling and removing his posts that violated policies on misinformation and inciting violence. Twitter deleted a post Trump retweeted that attempted to discount the covid-19 death toll, for example. All three social media companies eventually banned his accounts.

The misinformation spread early in the pandemic also included unproven claims — also repeated by the former president — that hydroxychloroquine could cure the coronavirus.

In May 2020, a 26-minute video presented as a trailer for an upcoming documentary called “Plandemic” featured a prominent anti-vaccine activist falsely claiming that billionaires were helping to spread the virus to increase use of vaccines. The slickly produced piece looked like a professional documentary and spread quickly across the world. By the time Facebook took down the original video, it had already been viewed 1.8 million times, and new versions of it kept cropping up. YouTube took down copies of the video, too, though one version hit 7.1 million views before it was removed, according to the Verge.

The platforms are deeply interconnected. An anti-vaccine YouTube video may not be recommended by the company’s algorithm, but if it is shared in a Facebook group or tweeted by an activist, it could still get a lot of views. With “Plandemic,” versions of the video kept appearing even after Facebook and YouTube took down the originals, turning enforcement into a game of whack-a-mole.

One conspiracy theory that has been circulating online for months is that coronavirus vaccines will be used to embed microchips in the arms of every American. A YouGov poll released July 15 showed that 20 percent of U.S. adults believed the chip falsehood was probably or definitely true.

Videos popped up this spring of people purporting to stick magnets to their arms where they had received a shot. The CDC refuted the myth, saying the vaccines “do not contain ingredients that can produce an electromagnetic field at the site of your injection.”

Anti-vaccine groups have also evolved to dodge detection, using code words to shield themselves from moderation algorithms. NBC reported that one anti-vaccination group with more than 40,000 followers on Facebook called itself “Dance Party.” Members referred to getting the vaccine as “dancing” and used “pizza” to refer to “Pfizer.”

Today, anti-vaccine content is still common on social platforms. The Avaaz research, which hasn’t been made public until now, tested how Facebook’s algorithm reacted to accounts that began interacting with vaccine information.

One account started by liking a page with known vaccine misinformation. The other started by searching "vaccine," which turned up results including pages with anti-vaccine content. One page about side effects encouraged people to discuss injury after receiving vaccines. Another said "vaccines harm." Still another urged people to consider "medical freedom," which has been a rallying cry for anti-vaccine communities. One was called "Autistic by Injection," a claimed link that has been debunked.

“Opening and liking several of these pages, in turn, led our account further into a network of harmful pages seemingly linked together and boosted by Facebook’s recommendation algorithm,” Avaaz researchers wrote in their report.

Facebook’s McAlister said the company has stopped recommending some of the pages cited in the report. “Vaccine hesitancy among people who use Facebook in the U.S. has declined by 50% and people are becoming more accepting of vaccines every day,” he said in a statement.


In a separate study published Tuesday, researchers at the left-leaning group Media Matters for America identified 284 public and private anti-vaccine Facebook groups, with more than 520,000 followers combined.

“It really speaks to the scale of the problem,” said Jevin West, director of the Center for an Informed Public at the University of Washington. “If it was a minor problem, then all the interventions, fact-checkers, would appear to have a larger relative effect.”

Facebook only recently blocked the hashtag #VaccinesKill, CNN reported and The Washington Post confirmed on Wednesday.

Anti-vaccine messages spread on social media have also influenced movements in real life, including in Southern California, where activist groups have bloomed. At one point, an anti-vaccine protest temporarily shut down a vaccine center at Dodger Stadium.


Deciding what to remove isn’t a cut-and-dried issue, the social media companies say.

They say vaccine hesitancy, or delaying or refusing a vaccine, can be a tricky area to police because much of the online content could be people expressing concern as opposed to purposely spreading false information.

“Vaccine conversations are nuanced, so content can’t always be clearly divided into helpful and harmful. It’s hard to draw the line on posts that contain people’s personal experiences with vaccines,” Kang-Xing Jin, Facebook’s head of health, said in an op-ed in the San Francisco Chronicle in March.

The platforms have also done a lot to encourage their users to read information on the coronavirus and vaccines produced by trusted doctors and health-care institutions. Facebook, YouTube and Twitter all amend links to CDC information on health-related posts. On Monday, YouTube said it would promote videos vetted under guidelines created by the National Academy of Medicine to the top of health-related search results.

But adding more good information doesn’t erase the bad information that persists on social media, especially when some of the misinformation leads to people refusing a vaccine that could save their lives.


Farid, the misinformation researcher from Berkeley, likened the companies’ position to an airplane manufacturer’s CEO countering criticism after a plane crash by saying that the vast majority of passengers are delivered safely to their destinations.

“Nobody would think that’s a reasonable response to two planes crashing out of the sky and 200 people dying,” Farid said.


A previous version of this story misstated the process for vetting YouTube videos. The National Academy of Medicine provides guidelines YouTube will use to vet videos, but does not do the vetting. This story has been corrected.