For months, activists have urged tech companies to fight the spread of falsehoods purporting that the 2020 presidential election was stolen — warning that such disinformation could delegitimize the 2022 midterms, in which all seats in the House of Representatives and more than a third of the Senate are up for grabs.
Yet social media giants are pushing forward with a familiar playbook to police misinformation this electoral cycle, even as false claims that the last presidential election was fraudulent continue to plague their platforms.
Facebook is again opting not to remove some election fraud claims and may instead use labels to redirect users to accurate information about the election. Twitter says it will apply misinformation labels to, or remove, posts that undermine confidence in the electoral process, such as unverified election-rigging claims about the 2020 race that violate its rules. (The company didn’t specify when it would remove offending tweets but said labeling reduces their visibility.)
That stands in contrast to platforms such as YouTube and TikTok, which are banning and removing 2020 election-rigging claims, according to recently released election plans.
Misinformation experts warn that the strictness of the companies’ policies and how well they enforce their rules could make the difference between a peaceful transfer of power and an electoral crisis.
“The ‘big lie’ has become embedded in our political discourse, and it’s become a talking point for election-deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” said Yosef Getachew, a media and democracy program director at the liberal-leaning government watchdog Common Cause. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job or any job in terms of removing and combating disinformation that’s around the ‘big lie.’ ”
The political stakes of these content moderation decisions are high, and the most effective path forward isn’t obvious, especially as companies balance their desire to support free expression with their interest in preventing offensive content on their networks from endangering people or the democratic process.
In 41 states that have held nominating contests this year, more than half the GOP winners so far — about 250 candidates in 469 contests — have embraced former president Donald Trump’s false claims about his defeat two years ago, according to a recent Washington Post analysis. In 2020 battleground states, candidates who deny the legitimacy of that election have claimed nearly two-thirds of GOP nominations for state and federal offices with authority over elections, according to the analysis.
And those candidates are turning to social media to spread their election-related lies. According to a recent report by Advance Democracy, a nonprofit organization that studies misinformation, Trump-endorsed candidates and those connected with the QAnon conspiracy theory have posted election fraud claims hundreds of times on Facebook and Twitter, drawing hundreds of thousands of interactions and retweets.
Those findings follow months of revelations about social media companies’ role in facilitating the ‘stop the steal’ movement that led up to the Jan. 6 siege of the U.S. Capitol. An investigation from The Washington Post and ProPublica earlier this year found that Facebook was hit with a barrage of posts — at a rate of 10,000 a day — attacking the legitimacy of Joe Biden’s victory between Election Day and the Jan. 6 riot. Facebook groups, in particular, became incubators for Trump’s baseless claims of election rigging before his supporters stormed the Capitol, demanding that he get a second term.
“Candidates not conceding isn’t necessarily new,” said Katie Harbath, a former public policy director at Facebook who now works as a technology policy consultant. “It … has a heightened risk [now] because it comes with a [higher] threat of violence,” though it’s unclear whether that risk is the same this year as it was during the 2020 race, when Trump was on the ballot.
Facebook spokesman Corey Chambliss confirmed that the company won’t outright remove posts from everyday users or candidates that claim there is widespread voter fraud, that the 2020 election was rigged or that the upcoming 2022 midterms are fraudulent. Facebook, which last year renamed itself Meta, bans content that violates its rules against inciting violence, including threats of violence against election officials.
Social media companies such as Facebook have long preferred to take a hands-off approach to dicey political content to avoid having to make tough calls about which posts are true.
And while the platforms have often been willing to ban posts that seek to confuse voters about the electoral process, their decisions to take action on subtler forms of voter suppression — especially from politicians — have often been politically fraught.
They have often faced criticism from civil rights groups for not adopting policies against subtler messages designed to sow doubt in the electoral process, such as claims that it’s not worth it for Black people to vote or that long lines make voting more trouble than it’s worth.
During the run-up to the 2020 election, civil rights groups pressured Facebook to expand its voter suppression policy to address some of those indirect attempts to manipulate the vote and to apply its rules to Trump’s commentary more aggressively. For instance, some groups argued that Trump’s repeated posts questioning the legitimacy of mail-in ballots could discourage vulnerable populations from participating in the election.
But when Twitter and Facebook attached labels to some of Trump’s posts, they faced criticism from conservatives that their policies discriminated against right-leaning politicians.
Those decisions are further complicated by the fact that it isn’t completely clear whether labels are effective at changing users’ perceptions, according to experts. Alerts that posts could be misleading might prompt questions about the veracity of the content, or could have a backfire effect for people who already believe those conspiracies, according to Joshua Tucker, a professor at New York University.
A user might look at a label and think, “'Oh, I should [question] this information,'” said Tucker. Or a user might see a warning label “and say ‘Oh this is yet further evidence that Facebook is biased against conservatives.’”
And even if labels work on one platform, they may not work on another one, or they may funnel people who are annoyed by them to platforms with more-permissive content moderation standards.
Facebook said users complained that its election-related labels were overused, according to a post from Global Affairs President Nick Clegg, and the company is mulling a more tailored strategy this cycle. Twitter, by contrast, said it saw positive results last year when it tested newly designed misinformation labels that redirected people from debunked content to accurate information, according to a blog post.
Still, the specific policies that social media giants adopt may be less important than the resources they deploy to actually catch and address rule-breaking posts, according to experts.
“There are so many unanswered questions about the effectiveness of the enforcement of these policies,” said Harbath. “How is it actually all going to work in practice?”