Good morning! I’m Cat Zakrzewski, a tech policy reporter at the Washington Post. I’ll be at the helm of The Cybersecurity 202 these next few weeks. If you can’t get enough of Post newsletters, sign up here for my forthcoming newsletter, The Technology 202. You won’t want to miss our daily analysis on the complex relationship between Washington and Silicon Valley, coming to your inbox in December.
There’s even more phony or misleading political news circulating on social media than there was in 2016, according to a new report that casts doubt on tech companies’ attempts to crack down on disinformation ahead of the midterms.
The University of Oxford report also found that social media users were more apt to share “junk news” than what researchers considered “professional content,” which includes news from established media outlets and information from the government, academics or political candidates. What’s more, the report concluded that the kind of junk news once largely confined to the far right is now being readily shared by mainstream political conservatives.
Two years after revelations that Russia orchestrated a wide-ranging campaign to influence the presidential election in favor of Donald Trump through hacking and fake news, the report highlights the complexity of reining in disinformation online. With less than a week remaining before the midterms, the research suggests the efforts that Facebook, Twitter and other companies have taken to suppress disinformation may be too little, too late. Lawmakers have made it clear that if technology companies are unable to police their platforms and address the spread of false or phony information that could influence the democratic process, they may pursue regulation.
The study, released at midnight Thursday, analyzed 2.5 million posts on Twitter and nearly 7,000 Facebook pages over several weeks in September and October. Researchers drew their sample by searching posts tagged with relevant political and election hashtags, and the handles of political parties.
Overall, it found the proportion of junk news circulating over Twitter increased by 5 percentage points since the 2016 presidential election — and made up about 25 percent of all URLs the researchers analyzed. “In comparison, links to professionally produced news content accounted for nearly 19% of shares,” the report found.
Junk news “is going to be part of our media system until social media platforms figure out what to do or are guided on what to do from public policy oversight,” said Philip Howard, the Oxford professor who led the study.
But the research also highlights a key challenge for technology companies, which are now charged with identifying which content is objectionable — at a time when there is no consensus on what should be considered junk news or disinformation.
For instance, the study classified Breitbart — a far-right news site once led by President Trump’s former adviser Stephen K. Bannon — as junk news. Stories on Breitbart, Howard said, fit at least three of the five “junk news criteria,” which include failing to meet the standards and best practices of professional journalism, using emotionally driven language, and relying on false information or conspiracy theories, highly biased reporting or the counterfeiting of established news outlets.
But Twitter pushed back on some of the junk news classifications the researchers made; many Americans consider Breitbart a legitimate news outlet. The company defended its decision not to ban all the links the researchers classified as junk, even as it touted its broader efforts to combat disinformation online.
“Many of the links deemed as ‘junk’ by the researchers are media outlets that reflect views within American society,” Twitter said in a statement. “Banning them from our service would be a knee jerk reaction and would severely hinder public debate, the potential for counter narratives to take hold, and meaningful discussion of news consumption.”
Facebook said the report’s findings were “misleading” and disputed its methodology. “The central takeaway of this study – that ‘the proportion of junk news circulating over social media has increased since 2016’ – is misleading and is a finding the researchers have arrived at based on Twitter data and applied to ‘social media’ more broadly in their abstract and conclusion,” Facebook said in a statement.
The researchers conducted the study by pulling a sample from Twitter to generate a list of junk news, then looking at which audiences on Facebook shared those items. The company pointed to other recent studies that contradict the Oxford University researchers’ findings, including one showing that users’ interactions with fake content have decreased on Facebook while increasing on Twitter. “We’ve seen Facebook-specific takeaways from other academic bodies that help paint a clearer picture of the state of false news on Facebook,” the company said in a statement.
While much of the attention in Washington and Silicon Valley has focused on how to stop the spread of disinformation from foreign actors, Howard notes the reality is much more complex. Many people in the United States have learned from Russia’s playbook in 2016, he says, and have started to disseminate divisive, fake or inflammatory propaganda for political or financial gain – particularly on the far right of the political spectrum.
“They have seen what a headline in all caps letters or a slightly doctored photo can do to boost your site revenue,” Howard said. “This way of doing political communication has migrated to the U.S.”
But it’s easier for American tech titans to talk about policing their platforms for foreign interference than for domestic propaganda. They want to be seen as tough on disinformation while still avoiding criticism that they silence voices on one side of the political spectrum. Leading Republicans — including President Trump — have said that liberal bias among tech company workers has led to censorship of conservative voices on their platforms.
The companies have denied these charges. And Twitter signaled the stakes are different when the political information is created or shared by people in the United States. “They are also not foreign, not bots, and for the most part not coordinated,” Twitter said. “They are real people sharing news that reflects their views.”
Twitter also said that the study was done using a developer tool called an API, which allowed the researchers to pull data from the platform — but does not reflect the steps the company takes to make sure “spammy behavior” does not appear in trending posts, search and the general conversation.
Facebook and Twitter have been more vocal in recent months about their efforts to address “fake news” spread on their platforms.
Though they have resisted the idea of becoming Internet censors, Twitter and Facebook have adopted policies aimed at preventing disinformation from spreading widely and stepped up their efforts to stamp out fraudulent accounts. Facebook now has partnerships with independent third-party fact-checkers, and it is testing fact-checking technology for photos and videos that could identify visuals that have been manipulated. Twitter just launched a media literacy campaign with The United Nations Educational, Scientific and Cultural Organization.
As one recent example, Facebook last week suspended 82 pages, groups and accounts that had originated in Iran for engaging in “coordinated inauthentic behavior.” Twitter said it removed a small number of accounts based on information Facebook supplied about the campaign.
But the companies have compared the battle to remove fake news to an “arms race” as foreign actors get more sophisticated in their approach.
Though Howard likened the companies’ strategies for combating junk news to an ineffective “game of whack-a-mole,” he conceded that the problem itself is evolving. In 2016, he said, fake posts focused on individuals, such as Trump’s Democratic challenger Hillary Clinton. Now, fake news has expanded to a broader set of politically charged issues, such as immigration.
As The Post’s Drew Harwell, Tony Romm and Craig Timberg reported last week, the migrant caravan making its way to the U.S. border and the attempted mail bombings of major political figures both intensified the number of false and misleading reports on social media. “Misinformation has gone from being about particular candidates to being about particular issues,” Howard said.
PINGED: The Trump administration on Wednesday insisted law enforcement and intelligence agencies will be sharing information about any possible threats with state officials on Election Day next week. “We’re going to need to separate fact from fiction around the federal interagency that day to really understand if any of those authorities or capabilities need to be brought to bear,” a senior administration official said on a call outlining the administration’s election security plans.
The Department of Homeland Security will also be on call to assess whether any polling locations have security vulnerabilities and to respond instantly to reports of hacks. “And I can tell you, the progress we’ve made since 2016 working with those state and local election officials is immense,” another official said on the same call. “We’re not just able to push information down, but we’re receiving a great deal of information back from the state and local officials that allows all of us here at the federal government to understand the threats that are targeting election systems and respond appropriately.”
PATCHED: National security adviser John Bolton confirmed that the U.S. is conducting “offensive cyber operations” to ensure next week’s midterm elections are not disrupted. The Washington Post’s Ellen Nakashima and Paul Sonne reported that even though “Bolton did not specify the operation’s nature, U.S. Cyber Command has begun signaling to Russian operatives that their identities are known — an implicit warning not to attempt to disrupt American politics.” Additionally, Ellen and Paul wrote that Bolton later “confirmed that the offensive activity fell below the level of armed conflict — it does not result in death, damage or destruction — in that it did not ‘require the type of sign-off’ from senior officials that would normally precede the use of military force.”
Bolton also said that the Trump administration earlier this year adopted a classified directive “that effectively reversed the Obama administration view on offensive cyber operations” and eased “procedural restrictions” governing the launch of such operations. He said, however, that the new policy doesn't amount to a “no-holds-barred environment” and that decision-making procedures remain for launching offensive operations in cyberspace. “The objective here is not to have unrestricted cyberwarfare,” Bolton said. “The objective is to create structures of deterrence by making our adversaries understand that when they engage in offensive cyber activities themselves, they will bear a disproportionate cost, so that they think about it a lot harder before they launch a cyber operation to begin with.”
PWNED: The Trump administration still lacks a clear strategy to combat disinformation operations even though the midterm elections are fast approaching, Politico's Eric Geller reported Wednesday. “In the absence of high-level White House coordination, the administration is letting individual agencies such as the FBI, the CIA and the Department of Homeland Security make decisions about how to respond to foreign governments’ attempts to use social media and other propaganda to undermine U.S. elections, according to people who have been briefed on or participated in the administration’s discussions of the issue,” Geller wrote. “That means broader strategic questions remain unresolved because of White House turf wars, agencies’ competing priorities, political sensitivities and a lack of experience with a relatively new threat, the people say.”
Moreover, differences of views in the government over how to handle election security complicate the federal response, according to Politico. “The FBI treats election security probes like digital crime scenes, where investigators must preserve evidence and keep facts secret so prosecutors can build a case,” Geller reported. “DHS handles election-focused influence operations and cyberattacks like disaster zones, where public awareness and public-private cooperation can resolve incidents and reduce future risks. And CIA and NSA spies vigorously oppose declassifying their intelligence and sharing it with outsiders like state and local officials, fearing that publicizing what the U.S. knows would compromise the sources and methods used to learn it.”
— Sen. Ron Wyden (D-Ore.) on Wednesday scolded Director of National Intelligence Daniel Coats for not giving an unclassified answer to a group of Democratic senators seeking to understand more about the basis for Trump's comments about Chinese efforts to interfere in American elections. Coats only gave a classified response to inquiries from Wyden, Sen. Martin Heinrich (D-N.M.) and Sen. Kamala D. Harris (D-Calif.), according to a statement from Wyden's office. All three senators sit on the Senate Intelligence Committee.
“You can't have it both ways,” Wyden said in a statement. “If the president is making public statements about intelligence issues, there's no excuse for the DNI to hide under his desk.” Wyden, Heinrich and Harris asked Coats in an Oct. 4 letter whether the intelligence community agreed with Trump's comments that “China has been attempting to interfere in our upcoming 2018 election.” “I'm not asking for every word of the letter to be declassified,” Wyden said Wednesday. “But at the very least, the DNI should say publicly whether or not the president's statements are consistent with the government's intelligence assessments.”
— Despite concerns about potential foreign interference, “the 2018 midterms will be the most secure elections we’ve ever held” and Americans should not sit out the election, according to David Becker, executive director and founder of the Center for Election Innovation and Research. In an opinion piece for The Washington Post on Wednesday, Becker urged voters to trust the electoral process and said that election officials throughout the United States have worked to improve election security for the past two years.
“Russia’s efforts have driven an unprecedented response from federal, state and local officials charged with securing our election systems,” Becker wrote. “Time and time again, secretaries of state and state election directors from both parties have made clear that while the threat on our elections is real, they are singularly focused on and prepared to address it in 2018 and beyond.”
— More cybersecurity news from the public sector:
— Michael Hayden, a former director of the National Security Agency and the CIA, told the Wall Street Journal's Adam Janofsky that intelligence agencies tend to classify too much information about cybersecurity threats that could prove useful for businesses. “Organizations such as the CIA and NSA keep too much information secret and for too long out of caution, he said,” Janofsky reported on Wednesday. “Sometimes, this leaves companies more vulnerable than necessary. ‘The instinct [of intelligence agencies] is to preserve sources and methods, even if you’re not using the information you’ve gained. That needs to be overcome,’ he said.”
— More cybersecurity news from the private sector:
- Trump is scheduled to receive a briefing on election integrity at the White House.
- CyberCon 2018 organized by Fifth Domain in Arlington, Va.
- The National Institute of Standards and Technology hosts the 2018 Cybersecurity Risk Management Conference from Nov. 7 through Nov. 9 in Baltimore.