Ctrl + N

U.S. lawmakers have not been as aggressive as some of their peers in other countries in holding tech companies accountable for harmful content on their platforms. But at a pair of hearings on the Hill this week, they're expected to ratchet up their scrutiny of Silicon Valley's content moderation practices.  

House Judiciary Committee Chairman Jerrold Nadler tells my colleague Tony Romm that today's hearing on the proliferation of white nationalism on Facebook and Google's platforms is just a “first stage” — he wants to press the companies to take action before considering new regulations. 

“Let's see what happens by just pressuring them first,” the New York Democrat said. “I'm reluctant to have regulation of speech. It usually goes too far. I don't know we have to get there yet.”

On the other side of the Capitol, Republicans plan to ramp up criticism of companies' content moderation practices this week — for very different reasons. At a Senate Judiciary Committee hearing tomorrow on allegations of anti-conservative bias, expect Republicans to repeat claims that Facebook, Google and Twitter are so aggressive in taking down content online that conservative voices and news are being suppressed. While there's so far no evidence of systemic bias against conservative content, the drumbeat of criticism could still increase the prospect of regulation — a threat that President Trump himself has repeatedly made.

The hearings showcase two different manifestations of the forthcoming debate in Washington over how to rein in Big Tech — and the challenge for the companies. Tech companies are under increasing pressure to show how they will keep people safe — while still preserving free speech. And in such a politicized environment, they must also show they are committed to fair treatment of all political parties. 

Given how complex these challenges are, technology companies will likely spend the foreseeable future navigating a patchwork of different rules across the globe governing when they need to take down content. 

The challenge of removing hate speech is only growing, as recent incidents like the New Zealand shootings and last year's Pittsburgh synagogue massacre put a spotlight on the ways technology platforms can be exploited to amplify violent and hateful messages online.

Yet Nadler resisted the idea of gutting the tech industry's prized federal legal shield, which gives the companies broad immunity for content posted on their platforms. The technology industry has fiercely defended Section 230 of the Communications Decency Act, even though some critics think it's time for the law to be overhauled to address the proliferation of violence and hate speech online today. 

“No, that would be a revolution in how social media works,” he said.

“It says to me that there's a felt need, there's a feeling abroad as well as here that social media has been used for bad purposes, has been used to promote racist or hateful doctrines and hate speech,” Nadler said.

Australia recently passed a law that could result in jail time for executives at companies that leave violent content online. In the United Kingdom, lawmakers unveiled a broad blueprint for how they could fine companies for failing to take down a wide range of harmful speech, from violent content to disinformation. As the U.K. and Australia become some of the first countries to consider such regulations, their actions could influence how policymakers around the world broach content moderation issues.

“If we can put in place a system of regulation that is sensible,” U.K. digital minister Jeremy Wright told Tony in a recent interview, “we won’t be the only country to want to do that.”

Other Democrats on the committee seemed open to considering regulations to address problematic content on the platforms. Rep. Karen Bass (D-Calif.) told Tony regulations should be examined. Rep. Cedric L. Richmond (D-La.) told him that regulation may be necessary to govern how companies address harmful content. He said technology companies should play a role in shaping what that regulation should look like and pitch a proposal. “They better go do it because what they don't want is for us to do it, because we're not going to get it right,” Richmond said. “We're going to make it swift, we're going to make it strong and we're going to hold them very accountable.”

BITS, NIBBLES AND BYTES

BITS: A bipartisan duo called on the Federal Trade Commission to “take action” against Facebook and Google over potential data security and competition violations, The Hill's Emily Birnbaum reports. Sens. Amy Klobuchar (D-Minn.) and Marsha Blackburn (R-Tenn.) asked the agency in a letter if it were investigating Google, and they called on the FTC to provide the public with more details about its investigations into internet companies. 

“We understand that the FTC does not typically comment on nonpublic investigations, but the public discussion surrounding Google and other companies’ conduct have made this a uniquely important national issue,” the senators, who both have worked on tech policy issues, wrote in the letter.

“Accordingly, we respectfully request that the FTC consider publicly disclosing whether it is conducting an investigation of Google and/or other major online platforms and describe, in general terms, the nature of the conduct under examination in any such investigations,” they also wrote. 

Rep. David N. Cicilline (D-R.I.) has previously called for the FTC to investigate Facebook for antitrust violations. The agency is already considering levying a multibillion-dollar fine against Facebook as it investigates the company for potential privacy violations following the Cambridge Analytica scandal. 

NIBBLES: The European Union on Tuesday unveiled a new set of guidelines to steer the way companies and governments ethically develop artificial intelligence, according to The Verge's James Vincent. The principles call for accountable, explainable and unbiased AI systems. 

“They don’t offer a snappy, moral framework that will help us control murderous robots,” Vincent wrote. “Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology.”

The principles state that AI “should not trample on human autonomy.” They also say AI should be secure and accurate. The recommendations call for data related to AI systems to be stored securely. They also say AI systems should be available to all and should not be biased along the lines of gender, race or other characteristics. 

BYTES: The European Union's data commissioner is investigating the European Commission and other E.U. institutions' software deals with Microsoft to ensure they comply with the region's sweeping data privacy rules, according to Reuters' Francesco Guarascio and Foo Yun Chee. 

The investigation, opened Monday, underscores how the E.U.'s General Data Protection Regulation, which went into effect last year, is holding companies to a higher bar when it comes to data privacy. 

“The probe will look into the Microsoft products and services used by the institutions and whether the contractual agreements between them and the U.S. software company are GDPR-compliant,” Reuters reported. 

Microsoft told Reuters it was working with the E.U. data authority on the investigation. 

“When relying on third parties to provide services, the E.U. institutions remain accountable for any data processing carried out on their behalf,” Assistant European Data Protection Supervisor Wojciech Wiewiorowski said, per the Reuters report. “They also have a duty to ensure that any contractual arrangements respect the new rules and to identify and mitigate any risks,” he said.

PRIVATE CLOUD

— Technology news from the private sector:

Unlike at social networks such as Facebook and Twitter, the people who respond to reports of harassment are largely unpaid volunteers.
New York Times
Facebook is gearing up for upcoming elections in India.
CNET
Twitter just took another big step to help boot spammers off its platform: it’s cutting the number of accounts Twitter users can follow, from 1,000 per day to just 400.
TechCrunch
“Individuals and organizations who spread hate, attack, or call for the exclusion of others on the basis of who they are have no place on Facebook,” a Facebook spokesperson said.
Buzzfeed News
Twitter CEO Jack Dorsey received a total salary of $1.40 in 2018, the social media company said Monday.
The Hill
PUBLIC CLOUD

— Technology news from the public sector:

Warner and Fischer contend that the data gathered through these practices gives the biggest tech companies a major advantage over their smaller competitors.
Axios
White House aides would recommend President Donald Trump veto a bill to restore landmark net neutrality protections if reinstated by Congress, according to a document sent to lawmakers Monday and seen by Reuters.
Reuters
Colorado Gov. Jared Polis (D) is expected to sign net neutrality legislation that bans internet service providers from getting taxpayer money in Colorado if they slow down internet access or unfairly speed up certain websites.
The Hill
#TRENDING

— Tech news generating buzz around the Web:

Our tech columnist answers your questions about how to block spam, nuisance and fraudulent calls on your home phone.
Geoffrey A. Fowler
Class accounts are a way for incoming freshmen to make friends, find roommates, and suss out colleges before fall.
The Atlantic
It's official: Google launches its world-first drone delivery business in Canberra's north.
ABC
WIRED IN