The study examined 225 pieces of content that independent fact checkers had rated as false or misleading between January and March. The Oxford researchers found that 59 percent of it remained on Twitter, 27 percent remained on YouTube and 24 percent remained on Facebook.
“It’s surprising that so many of the things that have been proven to be false are still living on social media,” said co-author Philip N. Howard, director of the Oxford Internet Institute, which conducted the study in concert with the university’s Reuters Institute for the Study of Journalism and the Oxford Martin School.
Researchers also found that the most common type of coronavirus misinformation involved false claims about the actions of governments or international authorities, such as the United Nations or the World Health Organization.
The most powerful spreaders of misinformation were politicians, celebrities and other public figures, who were the source of about 20 percent of false claims but generated 69 percent of total “engagement,” a measure of the reach of misinformation on social media. The report cited President Trump and Brazilian President Jair Bolsonaro as politicians who have made documented false statements about the pandemic. In March, all three platforms studied in the Oxford report removed some misinformation from Bolsonaro that violated their policies against harmful content.
Independent fact checkers have increased their focus on false claims about the coronavirus as the pandemic has grown in recent months, with checks on the subject rising more than 900 percent between January and March. The largest category among the items studied — drawn from a list of fact checks maintained by First Draft, a nonprofit group that combats misinformation and disinformation — was partially true information that had been twisted or manipulated to make it misleading. Only 38 percent of the items studied were completely fabricated, the Oxford researchers found.
Twitter said it created a policy against misinformation related to the coronavirus pandemic on March 18, which could explain the uneven results from a study whose data set began in January.
“We’re prioritizing the removal of content when it has a call to action that could potentially cause harm,” said Twitter spokeswoman Katie Rosborough. “We will not take enforcement action on every Tweet that contains incomplete or disputed information about COVID-19. Since introducing these new policies on March 18, we’ve removed more than 1,100 Tweets and challenged 1.5 million potentially spammy accounts targeting COVID-19 discussions.”
YouTube spokesman Farshad Shadloo said in a statement, “We have clear policies against COVID misinformation and we quickly remove videos violating these policies when flagged to us.”
Facebook spokesman Andy Stone said, “Since the World Health Organization declared COVID-19 a global public health emergency, we’ve been taking aggressive steps to stop misinformation and harmful content from spreading, including by making additional investments to our program of over 60 fact-checking partners around the world who are debunking false claims in over 50 languages.”