Ctrl + N
WhatsApp's new tip line for misinformation in India could be a preview of how its owner Facebook tries to tackle fake news as it reorients its business toward encrypted messaging.
The tip line, which allows people to forward suspect messages to an automated account, is among the first public signs Facebook is developing new misinformation-busting techniques after its promise to encrypt messages across its platforms raised concerns about its ability to police harmful content.
The company's move to test this new strategy on WhatsApp — which is already built on encryption so strong the company cannot see the contents of its messages — shows Facebook looking outside its usual playbook, which largely relies on algorithms and human moderators to screen news on open forums such as Facebook News Feed and Instagram.
“I think it’s going to be an experiment to see how well it works here,” said Ashkan Soltani, former Federal Trade Commission chief technology officer. “They have to do something.”
But experts are skeptical that a tip line will be effective. They raise concerns about the company's ability to respond to reports it receives at a mass scale — and whether people will even report fake news forwarded to them.
“I worry a little bit about it being more of a gimmick than being an effective intervention,” said Matthew A. Baum, a professor of public policy at the John F. Kennedy School of Government at Harvard University.
Most messages on WhatsApp come directly from people users know, or from groups with more intimate ties than the connections people might have on a broader social network such as Facebook or Twitter. “It’s a harder nut to crack,” Baum said.
Still, India could be an interesting microcosm for testing such a strategy — especially as misinformation spreads virally on the messaging app ahead of the country’s elections later this month.
WhatsApp partnered with the India-based start-up Proto to create the system, in which an automated account will let people know whether their flagged message is verified, not verified, or “out of scope.”
The reports could also provide useful data that inform the company’s understanding of the types of falsehoods that are proliferating.
“The goal of this project is to study the misinformation phenomenon at scale — natively in WhatsApp. As more data flows in, we will be able to identify the most susceptible or affected issues, locations, languages, regions, and more,” Proto's founders Ritvvij Parrikh and Nasr ul Hadi said in a statement. Proto plans to share its findings with the International Center for Journalists to help other organizations learn from the project.
However, with more than 200 million people in India using WhatsApp, it remains to be seen how quickly and effectively the tip line will be able to respond to flagged misinformation. The Wall Street Journal’s Newley Purnell submitted several messages to the tip line yesterday as it launched. After 20 hours, he had not received a response:
Another update: it's been about 20 hours since we submitted the first of our several messages to WhatsApp's India tip line, which launched yesterday to help debunk dubious content. No classifications received yet. Stay tuned. https://t.co/TwT9RkJp0H
— Newley Purnell (@newley) April 3, 2019
Soltani compared the tip line technique to the way offices in the early 2000s asked employees to forward every spam message to their IT departments. It didn’t work then, Soltani said, and he’s skeptical it can work at a much larger scale in India.
Facebook, Soltani says, is "trying everything" as it races to fight misinformation during its shift to encryption.
Chief executive Mark Zuckerberg admits quelling misinformation in encrypted messages is trickier than on other platforms, but he has pledged to find a way. The company has already unveiled a feature on WhatsApp that limits forwarding a message to five chats at a time, an attempt to address similar concerns.
BITS: Congress is set to take an early step aimed at cracking down on robo-calls, which are seeing renewed attention on Capitol Hill after they rang Americans' phones 26 billion times last year, my colleague Tony Romm reports.
"The newly revived effort in the Senate takes aim at those who disguise their attempts to steal Americans’ personal information — often by using phone numbers that appear similar to those they’re trying to target," Tony wrote. "These fraudulent, illegal calls comprised roughly a quarter of the 26 billion robo-calls placed to U.S. mobile numbers last year, according to one industry estimate."
The bipartisan legislation, known as the TRACED Act, is scheduled to come before a tech-and-telecom subcommittee of the Senate Commerce Committee today for an early vote that's expected to pass. The bill would encourage carriers such as AT&T and Verizon to invest in technology that helps their customers identify whether calls are real or spam, and it would give the government more power to hit robo-callers with fines.
Tony also explained why it's challenging for Congress to block robo-calling companies altogether, even though that would be politically popular:
i get this q a lot. and it's a great q with a complicated answer.
— Tony Romm (@TonyRomm) April 2, 2019
first, the reality is a boatload of robocalls come from legit biz, like Capital One, American Express, student lenders, mortgage companies and others that want MORE ability to auto-dial, not less. https://t.co/pL8cwvO6Vg
NIBBLES: Google said it will require the outside companies employing its contractors and temporary workers to provide those workers full benefits — including health care, a $15 minimum wage and 12 weeks of paid parental leave — according to an internal memo provided to The Hill's Emily Birnbaum. The suppliers will need to implement the minimum wage requirement by January and ensure health coverage is available by 2022, a Google executive said.
Google announced the new policy as more than 900 Google workers signed a petition calling for equal treatment of these contractors, known within the company as “TVCs.” These TVCs account for 54 percent of the company's workforce, or about 122,000 positions, according to the workers' letter.
The benefits do not extend to self-employed independent contractors, but they will affect the companies that supply workers who staff Google's cafes, transportation services and other positions.
“If folks don’t meet the standards by the deadline, then business decisions will need to be made, and then we’ll need to continue to audit our suppliers through perpetuity to make sure that people are still meeting those standards,” a Google spokeswoman told The Hill.
BYTES: YouTube chief executive Susan Wojcicki and her deputies told employees not to rock the boat when they raised concerns about the proliferation of toxic videos, conspiracy theories and incendiary content on the website, Bloomberg's Mark Bergen reports.
Bergen spoke with more than 20 current or recently departed YouTube employees who "reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement." One employee suggested flagging troubling videos that fell short of the hate speech rules and stopping YouTube's algorithms from recommending them to viewers. Another created an internal vertical to show just how popular "alt-right" video bloggers were.
Wojcicki would “never put her fingers on the scale,” one person who worked for her told Bergen. “Her view was, ‘My job is to run the company, not deal with this.’”