Those social networks have long struggled with the misinformation runoff from bigger rivals such as Facebook and Twitter, which have worked to stifle the spread of election disinformation on their sites. Those tech giants spent months preparing for this period, marshaling tens of thousands of content moderators to label posts, hide tweets and even shut off political ads.
The more misinformation circulates on the large social networks, the more it trickles down to the smaller sites better known for posting wedding photos, connecting with potential employers and complaining about a neighbor’s dog.
“Of course, the Internet is a space without borders, and that means the conspiracy theories and propaganda and misinformation do not remain static across platforms,” said Samuel Woolley, a professor and director of a propaganda research team at the University of Texas at Austin.
It’s not the first time. In the lead-up to the 2016 election, Pinterest became a repository for thousands of political posts created by Russian operatives. Pinterest was one of the first social media companies to decisively crack down on anti-vaccine disinformation in 2019, and it expanded its policies to prohibit voting misinformation this year. LinkedIn last year began releasing reports on its content moderation practices, detailing its removal of fake accounts, spam job postings, harassment and even child exploitation.
In the past week, misinformation and conspiracy theories have surged across the board online — something largely expected by the research community and the tech companies. President Trump took to Twitter to falsely claim victory, and he and his team claimed the Democrats were trying to “steal” the election. Fake posts about voter fraud in Arizona and Georgia also took off. Major media networks declared Joe Biden had won the election on Saturday despite Trump’s repeated denials.
The smaller platforms have fewer people to respond to false claims, but in some instances they are taking much more expansive steps to keep falsehoods about the election from gaining traction.
In Pinterest’s case, the company removed posts about stolen ballots that violated its policies. It is now preventing searches of controversial topics that are more likely to turn up election misinformation, like “stolen ballots.” The company also cracked down on phrases tied to fast-spreading conspiracy theories, such as “Sharpiegate,” the false claim that votes in Arizona were thrown out because people marked their ballots with Sharpie pens.
“One of the ways they’re cutting the corner on the fact that they have less resources for doing the moderation is they’re just straight up blocking some of these terms and hashtags, which works better,” said Alex Stamos, the former chief security officer of Facebook and the director of the Stanford Internet Observatory. He’s part of a coalition of researchers studying electoral integrity, which flagged the problems to Pinterest. “It’s much easier to just block Sharpiegate than to try to wade through what is misinformation and what isn’t. You can still enjoy your handmade tea cozy pictures without having your Sharpiegate content.”
Pinterest says its teams are working to proactively review and remove content that seeks to delegitimize election results on the basis of false and misleading claims.
“Our goal is to inspire Pinterest users with useful and relevant ideas, and we believe misinformation takes away from this experience,” company spokeswoman Charlotte Fuller said.
Pinterest’s fight is indicative of the “long tail” of misinformation that spreads from major sites such as Facebook and Twitter to smaller social media sites and blogs, Woolley said. Even though the smaller sites have fewer users, they often have more tightly knit communities whose members trust what one another share.
“Just because they don’t have the same ability to go viral, that doesn’t mean they can’t have some kind of impact,” Woolley said.
Microsoft-owned LinkedIn had been preparing for the election and is employing many of the same content moderation tactics it used to respond to misinformation about the coronavirus. The company also created an election news module where people can find news from authoritative media sources about the election results, which is being updated by a team of 75 editors.
“We’re closely monitoring for and taking action on content that violates our policies, including unfounded claims of election fraud,” LinkedIn spokesman Greg Snapper said. “We also continue monitoring for and taking action on any premature claims of victory, unless the content includes context that the claim is being disputed or is premature.”
Still, one post asserted that there was systematic voter fraud in Pennsylvania, despite a lack of evidence. In another post, a user linked to a video from One America News that falsely claimed Trump won the election and Democrats were trying to steal it. LinkedIn has removed these posts.
One tactic that may be working for Nextdoor: vigilant volunteer moderators who police their own groups.
Some users reported seeing claims of voter fraud or other election gripes pop up but quickly get taken down.
“It’s crazy that they’re able to remove these conversations about the election so quickly,” said Jenn Takahashi, who manages the Twitter account Best of Nextdoor. The account collects Nextdoor screenshots from users across the nation.
Takahashi asked followers about election misinformation and found that many people had seen a post but checked back later to find it gone. But others shared screenshots of posts alleging voter fraud that remained online, including one promoting a false claim that Wisconsin had more votes than registered voters.
Nextdoor did not respond to a request for comment.
Debunked claims of voter fraud, including the one about Sharpies, continued to spread on meme site iFunny, according to disinformation researchers. One meme showing a fake story about Canadian military intervention racked up 1,600 likes. iFunny removed that post and several others after The Washington Post inquired about them.
iFunny’s policies allow political satire and opinion but prohibit calls for others to support a particular candidate. “We are product-focused on humor, not on breaking news or politics,” Denis Litvinov, chief information officer of parent company FunCorp, said in a statement. The company increased its moderation resources but hasn’t seen anything “extremely unusual” for such a big event, he said.
These companies also have more flexibility because they do not see political speech and news as core to their product offering in the same way as websites like Facebook and Twitter, misinformation experts said. And in some ways, the smaller social media sites have an easier time enforcing policies because they are not thrust into the political spotlight as often.
“People don’t direct as much ire at a company like Pinterest when they make very firm decisions to block, say, anti-vaccine content as they would at Facebook,” Woolley said.