The migrant caravan in Mexico and the attempted mail bombings of major political figures this week have unleashed torrents of false and misleading reports on social media, testing the limits of costly efforts by Silicon Valley to combat disinformation ahead of the 2018 midterm elections.
Despite hiring thousands of employees and investing in teams dedicated to quelling phony information two years after the problem emerged during the 2016 presidential election, the country’s most influential tech companies have struggled to respond.
Facebook, Twitter and Instagram have resisted demands to remove some of the viral conspiracy theories and extremist content — a reflection both of the gravity of the task and of their belief that they should not serve as arbiters of truth.
The attempted pipe-bomb attacks, which targeted former president Barack Obama and others who have been critical of President Trump, were almost immediately characterized in widely shared Facebook and Twitter posts as a conspiracy engineered by Democrats to undermine the conservative cause. Michael Flynn Jr., the son of the president’s former national security adviser, said in tweets to his roughly 98,000 followers that the bombs amounted to a “political stunt.”
Claims that the bombs were a hoax and slurs against one of the bombs’ targets, liberal philanthropist George Soros, also proliferated widely on the Facebook-owned photo-sharing giant Instagram. Social media researcher Jonathan Albright said the Instagram posts amplified conspiracy theories and “some of the worst hate speech, Hillary Clinton memes and violently anti-Semitic messages I’ve seen to date.”
The caravan, a potent symbol of the brewing migrant crisis at the U.S. border, was portrayed by some prominent conservative figures as a violent horde mobilized for invasion, including through the sharing of a falsely labeled image showing a bloodied Mexican policeman that was in fact taken elsewhere in the country in 2012.
That image, first posted early Sunday, spread virally on Facebook and Twitter, including through a post by Ginni Thomas, a conservative activist who is the wife of Supreme Court Justice Clarence Thomas.
The hoaxes were amplified by accounts known to echo Kremlin propaganda, according to researchers who say the hoaxes are a form of manipulation they have detected repeatedly on controversial topics since the 2016 election. But the largest sources of disinformation on the caravan and the attempted bombings have come from domestic sources, researchers say.
The continued spread of misinformation this week shows how the sites still waver on even the most incendiary content tied to potential real-world violence.
“This is an example of where social media companies have a responsibility not to amplify propaganda that is demonstrably false,” Rep. Ro Khanna, a Democratic lawmaker who represents a part of Silicon Valley, said in a statement. “A newspaper or television station would never claim that the pipe bombs are fake, and they wouldn’t give that perspective the time of day. Similarly, social media companies need to have basic third-party verification so they are not allowing false claims to be retweeted or shared.”
The flood of misinformation has infuriated lawmakers, who have remained vigilant since other bad actors — including agents of the Russian government — stoked social and political unrest online with divisive messages of their own. They are especially wary that these hoaxes and conspiracy theories are gaining traction, and may intensify, with the 2018 midterm elections less than two weeks away.
The tech industry has struggled to balance calls for combating misinformation with concerns about protecting free speech, especially at a time when conservatives have blasted Silicon Valley for a supposed pro-liberal bias.
“On one side, they are in the position where they really have to be thinking about protecting the public interest. And on the other side, they don’t want to tick off huge constituencies,” said Dipayan Ghosh, a former policy adviser at Facebook and in the Obama White House who is now a fellow at the Shorenstein Center on Media, Politics and Public Policy. The leading social media platforms are “far more hesitant to do anything because they’re afraid, they’re very afraid of the backlash they could get from conservatives in this country.”
“We have taken action,” Facebook said in a statement Thursday. “We’ve demoted stories rated false by fact-checkers, like content about police brutality by migrants and pipe bombs, and we’re removing content that violates our policies, like hate speech or support for the bombing attempts.”
Instagram, which belongs to Facebook, didn’t respond to requests for comment.
Twitter said it relies on truthful tweets to correct and neutralize false information on its platform, unless messages break its rules, such as threatening violence. “Accounts that deliberately attempt to disrupt the public conversation, including sharing the same content repeatedly or trying to game trending topics, will face enforcement action pursuant with our policies,” a Twitter spokesman said.
Companies have moved more aggressively than in the past to shut down accounts acting in coordinated, deceptive and “inauthentic” ways while also dramatically stepping up the monitoring of disinformation. Facebook, for example, created a heavily publicized “war room” at its sprawling Menlo Park, Calif., campus to underscore its intensified efforts. It is also developing artificial intelligence that could flag false content or fake accounts, but the wide deployment of such technology is still years away.
But the companies still have difficulty in handling instances of Americans’ using social media to spread their political viewpoints, even when they are rendered in sensationalized ways that may include misleading information. Claims that survivors of a school shooting in Parkland, Fla., were “crisis actors” being paid to build support for gun control spread virally on social media, including climbing near the top of YouTube’s “Trending” list.
Although managing multiple accounts, using fake personas or employing automation can get users suspended from some platforms, the posting of demonstrable falsehoods generally will not. More often, platforms will limit the spread of misinformation when it is detected or reported, rather than deleting it.
Jonathon Morgan, chief executive of New Knowledge, a security company that tracks online disinformation, said the social media companies have shown some recent success at tackling professional campaigns from state intelligence agencies and terrorist groups. But they have shown little progress or interest in tackling the domestic conspiracy theories and extremist rhetoric that often follow major news events.
“They don’t consider it their responsibility, and even if they did . . . it would be incredibly difficult to police,” Morgan said.
On Thursday, sites such as Twitter remained awash with content suggesting that the pipe bombs had been mailed as part of a “false flag” attack to benefit Democrats. Memes spread on Facebook through shares and likes. A popular right-leaning Twitter user, Candace Owens, questioned the timing of the bombs’ delivery. “Caravans, fake bomb threats — these leftists are going ALL OUT for midterms,” she said in a tweet shared more than 8,700 times. By Thursday afternoon, the tweet had been deleted.
Twitter did not suspend many of the accounts sharing such messages or limit the reach of their content, saying they did not break the platform’s rules.
Still, Twitter accounts known for pushing Russian propaganda appeared to popularize some of the conspiracy theories. On Wednesday and Thursday, accounts aligned with the Kremlin’s views — tracked by Hamilton 68, a project of the German Marshall Fund that monitors social media for Russian manipulation — frequently promoted hashtags including “fakebombgate,” “fakebombs” and “bombhoax.”
Bret Schafer, a social media analyst for the group’s Alliance for Securing Democracy, said these accounts typically “hop on an existing bandwagon” to help boost the reach of hot-button political issues.
Social media posts about the migrant caravan have been particularly rife with misinformation. The network analysis firm Graphika studied 14,000 Twitter accounts that frequently posted about the caravan and found a high level of false and misleading information and images, including of the bloodied policeman. It also found that 22 percent of the posters showed signs of being bots, a term describing accounts that use automation software with minimal human control, signaling an unusually high level of manipulation of the caravan narrative.
“It’s a fantastic wedge issue that’s very close to the midterms and very easy to manipulate,” said Camille François, research and analysis director for Graphika.
The Graphika analysis also showed that many accounts are spreading misleading information about the caravan and the attempted bomb attacks, often by using such popular hashtags as “jobsnotmobs,” popularized within the past week by President Trump.
Twitter suspended some accounts over the image of the bloodied policeman because of the coordinated efforts to spread it, the company said Wednesday, and Facebook made it less likely to spread on the platform after the fact-checking website Snopes labeled it misleading. The account of Thomas, who did not respond to requests for comment, remained active, but the post was removed.
Albright, the research director for the Tow Center for Digital Journalism at Columbia University, traced the origin of false allegations about Soros’s funding the caravan to a number of tweets in March and early April. But just in the past few days, multiple posts have used identical language — “Well, now we know who is funding the caravan” — in pushing the claims about Soros.
Compared with disinformation spread by Russian operatives and others in 2016, Albright said, misleading information about the caravan is far more likely to spread among closed networks of influential social media accounts. Often, these accounts repeat the same words and images rather than targeting entire groups of people by demographic characteristics, as the Russians did.
“The method here is quite a bit more subversive,” Albright said. “It’s harder to pinpoint and take down.”
Andrew Ba Tran contributed to this report.