Some of the Facebook and Instagram ads that the House Intelligence Committee has linked to a Russian effort to disrupt the American political process during the 2016 presidential election. (Jon Elswick/AP)

When it comes to social media, government officials and the American public have grown wary of outside actors — and rightly so.

According to Attorney General William P. Barr, special counsel Robert S. Mueller III’s report found conclusive evidence of Russian disinformation and social media campaigns designed to “sow social discord, eventually with the aim of interfering” with the 2016 election.

But disinformation spread by foreign governments is not the only issue stoking worries.

Efforts to safeguard against election interference have ignited concerns over First Amendment protections and censorship and brought a new wave of criticism, often pitting citizens, top lawmakers and tech giants against one another.

Evaluating the responsibility of social networks such as Facebook and Twitter, and the human and algorithmic choices they make about which voices are heard and what those voices are able to say, will be a vital part of maintaining American confidence in the election system heading into 2020, says Jameel Jaffer, director of the Knight First Amendment Institute.

“They shape and distort public discourse through their decision-making,” Jaffer said. “But how do those decisions affect the integrity of the democratic process?”

President Trump has helped fuel a conservative movement that increasingly alleges it is being suppressed by social media platforms.

Tech companies maintain that they are neutral, insisting that they do not censor content on the basis of political affiliation. They do, however, review and restrict content deemed offensive, threatening or criminal — and that is entirely legal under the First Amendment.

The Constitution limits government action, not that of privately owned companies, which retain broad rights of their own. Congress went further, electing to shield social platforms from liability for statements users post. Under Section 230 of the Communications Decency Act, a site is treated more like a bulletin board than a publisher who makes editorial decisions, even as the site reserves the right to remove content that violates its terms and conditions. (For host sites to exist in their current form, these protections are essential.)

But courts are beginning to reconsider whether social networking sites function as modern-day equivalents of public squares.

In 2017, the U.S. Supreme Court ruled that the Internet and social media platforms increasingly function as quasi-public forums. Justice Anthony M. Kennedy, delivering the opinion of the court, wrote: “While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace — the ‘vast democratic forums of the Internet’ in general, and social media in particular.”

Rep. Devin Nunes (R-Calif.) filed a lawsuit last week claiming that Twitter, two parody Twitter accounts and a Republican political consultant violated the First Amendment and defamed him. He argued in court documents that Twitter, a private company, wields the power and qualities of a government entity and should be stripped of its constitutional protections.

“The ability to use Twitter is a vital part of modern citizenship. … That is because Twitter is not merely a website: it is the modern-day town square,” he said in the complaint, though he failed to address the wide-ranging (and possibly catastrophic) implications of viewing tech giants as extensions of the government.

So what does that mean for the companies, for trust in the platforms and for the law?

Legislative change would require Congress to agree on bipartisan policy designed to prevent foreign influence in American elections. Proposed legislation — such as the Honest Ads Act and the DISCLOSE Act — would impose transparency requirements, but with a divided government, advancing either seems unlikely.

Still, in a post-2016-election universe, tech companies face increased pressure to police their networks. Facing unprecedented criticism from both Democrats and Republicans, some companies have taken on the job of internal policing, said Jaffer, of the Knight First Amendment Institute.

“They do just enough, or promise to do just enough, to take the wind out of the sails of regulatory proposals, yet come up shy of effecting the needed change,” he said. The challenge, then, becomes how to enforce an internal monitor with no external regulator.

Politicians across party lines will complain of bias when a platform makes a decision adverse to them, even if the decision is correct. In 2020, the role of these networks will emerge as a bipartisan issue, not just a Republican grievance led by Trump.

“We are flying blind,” said Tom Glaisyer, managing director of the Democracy Fund’s Public Square Program, “and are unable to understand the level of misinformation and the malevolent actors that exist.”

Given their scale and role in public discourse, platforms, he suggested, need to do more than ameliorate the problem. They have a “burden and responsibility.”

Glaisyer added, “They need to think deeply about how they operate in a manner that supports our democracy.”
