Republican congressional candidate Joe Kent recently claimed “rampant voter fraud” in the 2020 election in an ad on Facebook — a misinformation problem Facebook has tried to correct.
It’s one example of the type of misinformation already testing Facebook in the midterm elections, according to researchers, civil rights advocates and some former employees, who are calling on Facebook to ramp up its policies to prevent the spread of election-related misinformation. The primaries are already well underway, and at least one candidate on Wednesday was being urged by Trump to declare victory before the results were in.
Facebook, like many social media platforms, constantly needs to shift and update policies as it learns how its platform has been misused — taking steps to remedy problems for the next election. For instance, Facebook ramped up its programs to address foreign interference after the 2016 election, when Russian operatives were found to have meddled with the presidential race.
Researchers expect misinformation spreading the “big lie,” the false claim that the 2020 presidential election was stolen, as well as voter-suppression efforts, to affect this cycle. In particular, they fear that misinformation could erode Americans’ faith in the electoral process or even lead to violence or harassment against election officials.
Facebook has not yet released a new public policy strategy for the November midterms to refresh and update its rules and tools to protect the elections, something it traditionally touts. And former employees, some of whom spoke on the condition of anonymity to discuss sensitive matters, said they worry that the social media company is already lagging far behind where it needs to be to prevent misinformation from distorting voters’ understanding and behavior in the primaries and general election.
The midterms present a special challenge to Facebook and other social media giants because of the sheer number of campaigns: all 435 seats in the House of Representatives and 35 of the 100 seats in the Senate are up for grabs. Facebook’s automated content moderation systems are more likely to struggle to catch rule-breaking posts that spread in hyperlocal environments than posts that are going viral across the nation, according to leaked internal company documents known as the Facebook Papers viewed by The Post.
Facebook in particular is facing scrutiny following its role in previous elections, in part because it has such a broad user base in the United States and has proved to be easily manipulated by those seeking to spread misinformation. Advocates worry that the platform could be used again to spread content that seeks to delegitimize primary and general election results, just as it was in the run-up to the Jan. 6, 2021, siege of the Capitol.
“It’s a little too late,” said Katie Harbath, a former Facebook public policy director and a fellow at the Bipartisan Policy Center think tank. “I wish they would have started sooner.”
The midterm primaries are already surfacing some of the issues. On Wednesday, Trump moved to baselessly discredit the too-close-to-call Republican Senate primary in Pennsylvania, urging his endorsed candidate, Mehmet Oz, to “declare victory” over his opponent before all the votes were counted.
Meanwhile, South Carolina congressional candidate Katie Arrington has been running an ad campaign on Facebook that claims Democrats and media organizations covered up “ballot harvesting” during the 2020 election, elevating debunked claims from a new documentary that alleges nonprofit organizations paid people to collect ballots and put them into drop boxes in various cities.
“The radical left got away with stealing the election,” Arrington, who was endorsed by Trump and is vying to represent the state’s first district, says in one ad.
Republican congressional candidates Kent and Arrington didn’t immediately respond to requests for comment.
Last week, more than 120 civil rights and advocacy groups pushed the CEOs of social media platforms including Facebook, Google’s YouTube, Twitter, TikTok and Snapchat to take more aggressive action to curb election-related disinformation in the first national election since the Jan. 6 insurrection. Twitter declined to comment, while representatives for the other tech companies didn’t immediately respond to requests for comment.
“Last time around, the companies put in some slapdash measures that were a day late and a dollar short,” said Jessica González, co-CEO of the media advocacy organization Free Press. “I think we learned that they need to start instituting election-integrity measures long before they did last time.”
Facebook spokeswoman Dani Lever said in a statement that “no tech company does more to protect elections online.” Lever cited Facebook’s programs to hinder foreign governments seeking to influence elections outside their nations, as well as its work with third-party fact-checking organizations to catch and address misinformation on its social networks.
Facebook, whose parent company last year changed its name to Meta, has long been a crucial tool for congressional campaigns to reach voters because of its widespread popularity among people of all demographics. The platform also gives candidates the ability to target their advertisements to thin slices of the electorate in their local communities at a relatively cheap price.
But as Republicans continue to spread the unfounded claim that the 2020 presidential election was stolen, Facebook will be forced to make tough calls about which posts to label, take down or leave up. During the 2020 cycle, Facebook banned ads claiming widespread voter fraud or that the U.S. election results were invalid, though it stopped short of banning organic posts making such allegations. It also banned claims that lawful voting methods such as mail-in ballots were illegitimate, as well as misrepresentations about how to vote.
Facebook enforces its policies against voter-suppression content through a mix of human content moderation and artificial-intelligence-backed systems that scan Facebook’s networks for potential rule violations. Facebook also directs users to a portal with accurate information about how to vote. Lever also said Facebook plans to work with state and local election authorities to identify and remove misinformation about voting rules and conditions.
While there is no proof of widespread voter fraud in the 2020 presidential election, GOP leaders in key states across the country are continuing to question President Biden’s win and using the perception of fraud among voters to pass new voting restrictions.
One question facing Facebook for the midterms is whether to repeat a political advertisement blackout in the final days before the election. That was a new tactic introduced in 2020 aimed at preventing last-minute surprises during the campaign.
Facebook will also need to decide how to treat candidates who declare victory before mainstream media outlets do. In 2020, the company decided to put labels on posts in which candidates falsely claimed victory. That might be too complicated for the midterms, experts said.
A month before Super Tuesday in 2020, Facebook struggled to catch voter-suppression efforts, according to the Facebook Papers, a trove of internal documents from whistleblower Frances Haugen. In a 111-page report, Facebook analysts warned that its social networks could be used to discourage Americans from voting in the upcoming election.
The February 2020 document rated Meta’s policy “readiness” to handle traditional voter-suppression ads, such as messages that claim it costs money to vote, as “high.” But the document rated the company’s ability to detect that content as “medium.”
Facebook analysts had a dimmer view of the company’s ability to tackle subtler forms of voter suppression, what Facebook called “demobilizing content”: messages such as, “Poll lines are 3 hours. It’s not worth it.” The analysts rated the company’s policies, detection and enforcement as low, according to the document.
“We haven’t solved the disinformation problem,” said Joshua Tucker, co-director of the NYU Center for Social Media and Politics. “We’re still going to face all the disinformation problems we faced in previous elections. And we’re still going to have this question of the extent to which platforms are favoring one side versus the other side.”
The coalition of civic advocacy groups is calling on Facebook and other social media platforms to go further this time around. They want the platforms to commit to increasing staffing and strengthening content moderation practices in the period between Election Day and when the new members take office in 2023 to help “ensure a peaceful transition.”
The groups are also asking tech companies to prioritize removing posts that amplify the “big lie” that the 2020 election was stolen or glorify the Jan. 6 siege of the Capitol, “particularly from political candidates and in fundraising advertisements,” they wrote in their letter.
“They have been leaving up content around the 2020 election saying the election was stolen,” said Yosef Getachew, media and democracy program director at advocacy group Common Cause. “You have candidates saying we have had prior elections stolen, so this one will be stolen as well, so it’s an ongoing issue that we are trying to get them to take seriously.”
Elizabeth Dwoskin contributed to this report.