Voters in the United Kingdom heading to the polls today had their own taste of the dark ways social media can be leveraged to influence an election.
Reddit uncovered a foreign influence operation linked to Russia. There’s been a wave of misinformation on Facebook. Politicians and political parties engaged in online trickery themselves -- manipulating a Twitter account and even doctoring a video.
“No one of them seems to be an election-threatening event, but taken together they’re very worrisome,” Paul Barrett, deputy director of the Center for Business and Human Rights at New York University, told me. “That’s because they demonstrate that even if the social media companies are a lot better prepared, there’s still a decent chance of disruptive events taking place.”
The storm of activity is testing the investments that American tech giants have made in safeguarding elections worldwide, following Russian interference on their platforms in the run-up to the 2016 U.S. elections.
And some company missteps are raising questions about whether social media giants have done enough to curb misuse of their platforms. “The investments they’ve made are very well justified,” Barrett told me. “But it’s not clear that even what they’ve done is enough.”
One cause for concern: Facebook’s ad system crashed in the critical days leading up to the U.K. election. The database that tracks all political ads on the platform was one of the social network’s key responses to revelations that Russian actors purchased ads to target U.S. voters with divisive issues.
Tens of thousands of political ads were no longer visible earlier this week, according to researchers who spoke to CNN. Figures that track individual politicians’ spending on the platform also disappeared, they said.
A crash like this, experts warn, makes it difficult to do a proper analysis of how social media ads are impacting the election. That's a big problem in light of Facebook's policy not to fact check politicians' ads on its own.
Facebook says the situation has since been resolved, but it could not tell CNN how many ads were affected. “We have fixed the bug and all of the impacted ads are now back in the Ads Library,” Facebook spokesman Andy Stone told The Technology 202 in a statement.
“The point of [Facebook's] Ad Library is to provide transparency to users and more information to voters ahead of their choices,” said Nina Jankowicz, a disinformation fellow at The Wilson Center, in an email. “It is already a fairly clunky and hard-to-use feature; if we can't trust that it is displaying all the information it is supposed to all the time, then what transparency does it truly provide?”
The Guardian also reported that there have been issues with other tech companies’ efforts to track election ad spending, part of a broader transparency push in recent years.
In one November report, Google incorrectly claimed that the Labour Party spent just £50 on advertising during the week of October 27, and nothing the following week. But the party actually spent £63,900 in those two weeks, at least 1,000 times more. The company confirmed that particular figure was inaccurate and said it was reviewing its reports.
“We are looking into this issue. If we find any ads that were mistakenly underreported, we will add them into our transparency report as soon as possible,” the company told The Guardian.
Snap also confirmed a flaw in its own ad transparency report led to incorrect reports about Conservative Party spending in one race, per the Guardian. Snap released a corrected version of the report.
“It has been unfortunate that Facebook, Google and Snapchat have all separately experienced problems relating to data,” said John Crowley, editorial director at First Draft, a nonprofit group that investigates online misinformation.
“Having confidence in the platforms’ political ad libraries is hugely important for journalists and other interested parties to be able to faithfully carry out a full and proper analysis of who is advertising to whom,” he said in an email.
The ad library issues may be a warning flare as the 2020 elections ramp up in the U.S., and researchers and journalists try to track an increasing swell of political spending and misinformation. Barrett, who wrote a report earlier this year on the ways disinformation might impact the 2020 election, said the tech companies can learn from the U.K. elections.
The companies should be "stepping up their degree of vigilance” and “making sure they are extremely well staffed," he told me. “They need to do a lot more of what they’ve already done."
BITS, NIBBLES AND BYTES
BITS: President Trump’s re-election efforts have retained the services of Phunware, a Texas-based firm that specializes in collecting phone location data, The Intercept’s Lee Fang writes. The services highlight the broad range of tracking tools campaigns are leveraging to target voters.
The move was first announced in a press release in October, touting “new and existing customer wins including American Made Media Consultants,” the firm established this year by Trump campaign manager Brad Parscale to handle advertising services for a variety of official Trump reelection PACs. The release, which went largely unnoticed at the time, was issued in conjunction with the Trump-Pence 2020 reelection effort.
The company claims to offer a wide variety of tracking services, such as geofencing. This technique could be used to target individuals who attend a particular event, such as a political rally or protest.
“Unfortunately Phunware does not comment on customer-specific data or information,” wrote Brent Brightwell, a spokesperson for Phunware, in an email to The Intercept. “Please contact the Trump reelection campaign directly should you have any questions about their activities or efforts.” The Trump campaign did not respond.
NIBBLES: YouTube will now remove videos that insult people based on “protected attributes” including race, gender expression or sexual orientation, my colleague Taylor Telford reports. But critics are skeptical that the policy will result in meaningful changes.
Journalist Carlos Maza, who has been a subject of ongoing homophobic harassment by a conservative YouTuber, said on Twitter:
TL;DR: YouTube loves to manage PR crises by rolling out vague content policies they don't actually enforce.— Carlos Maza (@gaywonk) December 11, 2019
These policies only work if YouTube is willing to take down its most popular rule-breakers. And there's no reason, so far, to believe that it is.
YouTube declined to remove the videos targeting Maza this summer, removing advertisements from them instead. The refusal sparked an outcry from LGBT rights advocacy groups. The content would now run afoul of YouTube's anti-harassment policies, which will also include “veiled or implied threats.”
“We’re glad that YouTube is acknowledging the harm that demeaning speech does to marginalized communities but this policy will only be meaningful if it is implemented robustly,” said Madihha Ahussain, special counsel for anti-Muslim bigotry at Muslim Advocates and member of the Change the Terms coalition.
Ahussain pointed out that videos that promote “false stereotypes and conspiracy theories about Muslims and other groups” continue to proliferate on the platform, despite YouTube's crackdown on hateful and white-supremacist content this summer.
YouTube did not immediately respond to questions from The Post about when enforcement would begin or what strategies would be adopted to meet the new standards.
BYTES: YouTube will explicitly ban content and comments aimed at misleading users about the 2020 Census, the company announced in a blog post yesterday. The new policy comes as tech companies shore up for a potential wave of misinformation around the nation's first fully digital census.
The U.S. Census Bureau will also join the YouTube Trusted Flagger program, which allows government agencies and other approved users to fast-track reports of content violations. The bureau has set up similar partnerships with Facebook's fact-checking partners.
Google also threw its support behind a resolution to ensure the census count is fair and accurate spearheaded by Sens. Brian Schatz (D-Hawaii) and Lisa Murkowski (R-Alaska).
Facebook has also promised to ban misinformation surrounding the census, and has convened experts to iron out its policies, as I reported last month. Twitter's policies prohibit sharing false information about how to participate in a civic event, but it has yet to release a census-specific policy, despite calls from 58 lawmakers last month pressing the company to do so.
— News from the private sector:
-- The American Civil Liberties Union is going to court to force Customs and Border Protection and Immigration and Customs Enforcement to turn over documents about their controversial cellphone surveillance technology, TechCrunch's Zack Whittaker reports. The documents could provide key insights into whether the multimillion-dollar surveillance program meets constitutional and legal requirements.
The group is asking the court to compel the agencies to turn over training materials and guidance documents related to StingRays — devices that impersonate cellphone towers, allowing operators to collect cellphone location data. The ACLU is also requesting documents that detail where and when the devices were deployed. Civil liberties groups have long raised concerns about the potential privacy violations raised by the devices.
The ACLU first filed a Freedom of Information Act request for the documents two years ago. CBP does not comment on pending litigation as a matter of policy, spokesman Nathan Peeters told TechCrunch. A representative for ICE did not comment.
— Tech news generating buzz around the Web:
- Jeanette Manfra, the outgoing assistant director for cybersecurity at the Department of Homeland Security, will join Google Cloud as global director of security and compliance in January, CyberScoop's Sean Lyngaas reports.
- PayPal's departing chief operating officer, Bill Ready, will join Google as the company's new commerce chief, Sarah Perez at TechCrunch reports.