Presidential candidate Sen. Amy Klobuchar (D-Minn.) is again taking aim at Big Tech -- this time with an audacious idea to tax big technology companies when they share users’ data. But critics are skeptical that such a proposal would be feasible.
As privacy scandals mount at firms such as Facebook, Klobuchar is positioning her data tax as a way to make technology companies think twice about how they share and profit from users’ data. Klobuchar floated the idea at the South by Southwest festival and a privacy hearing this week on Capitol Hill, suggesting that revenue from such a levy could be used to pay for cybersecurity protections.
But implementing such an idea might be difficult, experts noted. Taxing data means you have to know what such information is worth, for instance. Former Facebook employee Antonio García Martínez says data is often compared to oil, but that's a poor analogy because unlike oil that has a clear price, the ultimate value of data is unknown. Outside of using data to allow targeted advertising, it’s difficult to measure such information's value because that can change dramatically depending on the context.
“I think taxing data is a little bit unrealistic and silly,” García Martínez said. “But I can see how people feel like they’re being stolen from because I give you all this data and I get nothing in return but a slightly better service.”
Klobuchar’s proposal is just the latest sign that Democrats are tapping into that sentiment and trying to elevate regulating the technology industry as a 2020 issue. Her plan and Sen. Elizabeth Warren’s (D-Mass.) proposal to break up Amazon, Google and Facebook indicate some candidates are eager to put a stake in the ground on privacy issues.
But at these early stages, it’s difficult to tell whether these would be actionable plans if either Klobuchar or Warren were to win the White House.
In an interview at South by Southwest, Klobuchar acknowledged there are many details that need to be worked out regarding her plan, and she wants input from technologists.
Since they began selling ads, the largest technology companies have operated on the tacit understanding that you get free access to a service in exchange for your data -- ranging from the photos you upload to Facebook to the locations you search for on Google. But following incidents such as the Cambridge Analytica scandal, which showed Facebook data could be leaked to a political consultancy without people’s knowledge, that arrangement is under renewed scrutiny.
Klobuchar told me there’s a huge amount of economic value in the data companies collect, and consumers don’t know whether they’re getting a fair deal.
“Who says if that’s a fair exchange?” she said. “Do you think consumers can determine if that’s a fair exchange? I don’t know if they can.”
To assess value, García Martínez said policymakers could perhaps look at metrics tracked by companies related to advertising. For example, my colleague Geoffrey Fowler reported that Facebook gathered $82 in advertising for each member in North America in 2017. But such a figure doesn’t break down what information is worth on an individual basis, or its value to the company for other uses, such as developing artificial intelligence.
Klobuchar is elevating an issue that has been percolating in California’s governor’s mansion and the halls of Capitol Hill. California Gov. Gavin Newsom (D) last month proposed a “data dividend” requiring the large technology companies to pay consumers for their data.
Sen. Mark Warner (D-Va.) is planning to introduce a bill in the coming weeks to require large companies to regularly tell users the value of their data and to report the total value of their data holdings to the Securities and Exchange Commission. It’s an issue that has been on Warner’s radar for months, and he even pressed Facebook Chief Operating Officer Sheryl Sandberg on the idea in a hearing last year.
Klobuchar wasn’t familiar with the details of Warner’s plans, but she said her proposal is “along the same lines” as the one offered by Newsom. But while Newsom is proposing a dividend that would go back to people, Klobuchar wants the government to receive the proceeds of a data tax, which then would be used for cybersecurity and to reduce the debt.
Klobuchar says such a tax wouldn’t just deter companies from excessively using or transferring people’s data. It would also force them to better track when they are using people’s information.
But experts are skeptical that a tax is the right way to accomplish that.
Dante Disparte, the chief executive of Risk Cooperative, says the United States needs to respond to such concerns with privacy legislation, similar to the Dodd-Frank Wall Street overhaul, for data and tech companies.
“It’s the wrong posture,” Disparte said. “We need a more anticipatory framework that doesn’t just scare companies away and react.”
On Twitter, Georgetown Law staff attorney and teaching fellow Lindsey Barrett said such a plan would just make it more expensive for companies to mishandle people’s data, at a time when lawmakers need to outright ban bad corporate behavior.
Social media companies are scrambling to remove videos of a terrorist attack in New Zealand that killed 49, after the gunman reportedly broadcast 17 minutes of the attack, according to the Associated Press.
Facebook said Friday that it took down the footage after being alerted by police. Twitter and YouTube owner Google said they were also removing the footage.
But the video's proliferation across the services and Reddit raised new questions about the companies' ability to police violent content on their services. From my colleague Drew Harwell:
What responsibility do we want these companies to have? On Reddit, one of the most popular sites on the Internet, people have been narrating the video on a forum called "watchpeopledie." After more than an hour, this was posted: pic.twitter.com/C8nmt7CZgh— Drew Harwell (@drewharwell) March 15, 2019
As Ryan Mac of Buzzfeed News noted, the video's spread highlighted some of the problems with the companies' current algorithms and content moderation practices:
What's bizarre is that YouTube's algo or moderators have flagged these videos as sensitive, but users are still allowed to watch them after consenting. This is the screen you see before viewing. How is broadcasting mass murder not a violation of terms of service? pic.twitter.com/5o1nIRI5dP— Ryan Mac (@RMac18) March 15, 2019
Despite Twitter's earlier commitment to taking down the video I'm still seeing clips, including one shared from a verified account with 694K followers. I'm not sharing it here, but it's been up for two hours.— Ryan Mac (@RMac18) March 15, 2019
Even this morning, Drew could still find the video on YouTube:
It's been eight hours and you can literally still watch this video on YouTube.— Drew Harwell (@drewharwell) March 15, 2019
NIBBLES: Facebook's top product executive, Chris Cox, is leaving the company — marking the highest-level departure at the tech giant as it shifts its strategy toward privacy and messaging, my colleague Elizabeth Dwoskin reports.
Cox, who worked at the company for 13 years, was promoted in a reorganization last year to oversee the company's “family of apps.” The apps under his purview — Instagram, Messenger, WhatsApp and Facebook — have been distinct services, but this month Mark Zuckerberg announced the company plans to unify them.
“It is with great sadness I share with you that after thirteen years, I’ve decided to leave the company,” Cox wrote in his post. “Since I was twenty-three, I’ve poured myself into these walls. This place will forever be a part of me.” Cox didn’t offer any explanation for his departure.
BuzzFeed's Ryan Mac is reporting that Cox's departure is related to a disagreement over the company's new direction, according to a source familiar with this situation. Chris Daniels, who was recently elevated to head of WhatsApp, is also leaving the company, Facebook said.
The departures were announced amid a tumultuous 24 hours for the company, in which my colleague Tony Romm noted the company also experienced an almost unprecedented global outage as well as a possible criminal probe.
BYTES: YouTube has promised for years to police inappropriate content, but researchers, parents and consumers say it's still delivering “violent imagery, drug references, sexually suggestive sequences and foul, racially charged language” to children at a high clip, according to my colleague Craig Timberg.
The company's recommendation algorithm doesn't reliably segment content by appropriate age level, and the sheer volume of YouTube content makes it difficult for parents to monitor what their children watch. Advocates for children's privacy say the service's rise represents a significant shift from the highly regulated world of broadcast television.
“YouTube is the biggest pain point for parents today,” said James Steyer, chief executive of Common Sense Media, a San Francisco-based nonprofit group that advocates for families and schools worldwide. “Kids just stumble into completely inappropriate content constantly because it’s algorithmically driven.”