with Tonya Riley

Ctrl + N

The tech industry is warning lawmakers that their plans to overhaul a key Internet law could actually make it harder for companies to crack down on harmful content on their platforms.

Big Tech is arguing that Section 230 of the Communications Decency Act, which gives tech companies legal immunity for the content people post on their platforms, is what allows them to remove posts the companies consider to be terrorist, violent, or harassing -- with less concern that offensive posters will sue them. They say stripping or changing this immunity -- as politicians on both sides of the aisle increasingly want to do -- would give a legal advantage to trolls and extremists.

“Section 230 is essentially a Good Samaritan provision,” Edward J. Black, president and chief executive of the tech industry trade group Computer & Communications Industry Association, said. “It ensures online services — and anyone with a website — can quickly remove extremist content without risk of being sued for their efforts to stamp out bad actors.”

Washington and Silicon Valley are now embroiled in what's essentially a messy debate over free speech in the digital age. Republican lawmakers want to update the 1996 legal shield to address allegations of bias and suppression of conservative speech. It's also in the crosshairs of Democrats who are concerned about the spread of disinformation and fake news. But as Silicon Valley is now making clear, heavy-handed changes to this law could have troubling ripple effects -- and usher in a whole host of other problems that could turn social media into an unprotected Wild West.

The debate over Section 230 went into overdrive yesterday with the introduction of a bill from Sen. Josh Hawley (R-Mo.) that would revoke the legal immunity of big tech companies unless they could prove to the Federal Trade Commission that they're politically neutral.

Kim-Mai Cutler, a partner at the San Francisco-based venture capital firm Initialized Capital, explained the tech industry's concerns succinctly: 

And the tech industry is now working to illustrate how the provision has shielded companies that were trying to remove hateful content.

An often-cited example: A white nationalist sued Twitter, claiming the company violated his free speech rights by banning him from the service after offensive posts. Twitter won the significant legal battle last year when a California state appeals court ruled in its favor -- because the company was protected under Section 230.

The recent clash also underscores the deep divisions between Democrats and Republicans. At its core, Republicans say the companies have gone too far in content moderation -- at the expense of conservatives. But many Democrats who are floating Section 230 changes say the companies haven't intervened enough, allowing violence, hate speech and disinformation to fester on their platforms. They've pressed the companies on whether they're doing enough to remove content like videos of the mass shooting in New Zealand earlier this year.

As Jeff Kosseff, an assistant professor of cybersecurity law at the Naval Academy, points out, there are two polar opposite criticisms of Section 230:

Sen. Ron Wyden (D-Ore.), one of Section 230's original authors, seems to agree with Silicon Valley that stripping the immunity would make the discourse on these platforms even worse. 

"This bill would essentially force every platform to become 4chan or 8chan rather than maintain some basic level of decency," Wyden said in a statement to The Technology 202. "What CDA 230 actually does is enable private sector companies to take down inappropriate third-party posts without incurring liability."

Hawley's office stood behind its bill following the criticism yesterday. 

"Google, Facebook and Twitter currently get a massive benefit from government that no other entity gets—immunity from liability," Kelli Ford, a spokeswoman for Hawley, said in a statement. "Senator Hawley’s bill says that if they want to keep that special benefit, they have to adhere to the First Amendment standard of no viewpoint discrimination. They are free to do whatever they wish, of course. If these companies prefer to advocate a particular viewpoint or penalize other views, they certainly can. But they shouldn’t get 230 immunity while doing it.”


BITS: The Federal Trade Commission is investigating YouTube for its treatment of children's content, my colleagues Elizabeth Dwoskin, Tony Romm and Craig Timberg reported yesterday. The probe “threatens the company with a potential fine” and already has provoked YouTube to reconsider some of its current business practices, people familiar with the matter tell them. 

Sen. Edward J. Markey (D-Mass.), who is planning to release his own legislation to rein in YouTube, welcomed the investigation. “An FTC investigation into YouTube’s treatment of children online is long overdue,” wrote Markey in a statement responding to the story. “It’s time for the adults in the room to step in and ensure that corporate profits no longer come before kids’ privacy.” The company has endured increased scrutiny in recent weeks over claims that its algorithms sexualize children, in addition to years of allegations from consumer advocates that the site illegally collects children's data.

The company is weighing moving all its children's content into a separate product, the Wall Street Journal's Rob Copeland reported yesterday, and some employees are pushing the company to disable the auto-play feature on children's videos. 

The YouTube investigation is a major test of the FTC's ability to police Silicon Valley. 

“The FTC has been under immense pressure from congressional lawmakers and privacy advocates to be tougher on big tech companies, particularly when it comes to its enforcement of decades-old children's privacy rules, known as COPPA," Tony wrote in an email to The Technology 202. "The FTC under its GOP leader, Chairman Joe Simons, repeatedly has pledged to issue stronger penalties for privacy abuses, including COPPA. The commission sought to demonstrate this through its $5.7 million settlement with the company now known as TikTok, though Democrats wanted the FTC to go further. It faces the same test now with respect to Google and YouTube, after nearly five years of complaints from consumer advocates.”

NIBBLES: Two federal sexual harassment cases, a worker death and unsanitary working conditions all plague Facebook's contract moderators at a Tampa worksite, Casey Newton at The Verge reports. Several former Facebook moderators broke their non-disclosure agreements for the first time to provide a look into the day-to-day experience of the contract employees who make $15 an hour to remove child porn, organ harvesting videos and an onslaught of other content that violates the platform's rules.

The former workers described grim conditions at the Tampa site, with a single bathroom that "has repeatedly been found smeared with feces and menstrual blood." One contractor in Tampa had a heart attack and died in the office last year, and the managers initially told employees not to discuss this incident "for fear it would hurt productivity," Casey writes. 

One contractor in Tampa tells Casey he was diagnosed with PTSD after spending months flagging hate speech, violent imagery, and other offending content. Cognizant, the contractor that runs the Tampa moderation site, systematically purged employees to deal with burnout.

Facebook tells Casey it will launch an “audit program” this year to promote better working conditions and keep management at the sites accountable. But ultimately, by using contractors, companies like Facebook are able to “hold its contractors at arms length,” Casey argues.

Casey asked Chris Harrison, who leads Facebook's “global resiliency team” tasked with improving the well-being of workers, if the company would ever seek to limit the amount of disturbing content a contractor views in a day. 

“I think that’s an open question,” he said. “Is there such thing as too much? The conventional answer to that would be, of course, there can be too much of anything. Scientifically, do we know how much is too much? Do we know what those thresholds are? The answer is no, we don’t. Do we need to know? Yeah, for sure.”

BYTES: A shareholder resolution that would have forced Google to conduct a human rights audit of its controversial Chinese search engine project failed to receive enough votes to pass yesterday, TechCrunch's Zack Whittaker reported.

Google leadership opposed the resolution, claiming that its offerings in China are “consistent with [its] mission.” Questions about the search engine project have been raised by lawmakers in both parties since it was first revealed by The Intercept last fall. 

Facebook and Amazon shareholders have also used resolutions at the respective companies’ most recent board meetings to tackle civil rights issues, including facial surveillance technology, though none were successful. 

It’s unclear yet how many votes short the Google resolution fell, but Wall Street Journal reporter Rob Copeland reports shareholders didn’t have much time to act:


— News from the public sector:


— News from the private sector:


—  Tech news generating buzz around the Web:


— Talent news at the intersection of the tech industry and Washington:

  • Former Securities and Exchange Commission senior counsel Michelle Bond joined payment platform Ripple as head of government relations, according to a statement from the company.


Coming up:

  • The House Homeland Security Committee will host a hearing on Artificial Intelligence and Counterterrorism on June 25 at 10 a.m.
  • The House Homeland Security Committee will bring in representatives from Facebook, Google, and Twitter to discuss their companies' efforts to address terror content and misinformation on June 26 at 10 a.m.