Welcome back to The Technology 202! We’re finally ready to put those heated “best Thanksgiving side dish” debates behind us, but if you’re not, we’re all ears: cristiano.lima@washpost.com and aaron.schaffer@washpost.com

Below: Australia is setting its sights on social media trolls, and a CEO from Meta not named Mark Zuckerberg will testify before Congress. First up: 

Congress is weighing changes to Section 230, again. Here’s which bills stand a chance.

For years, lawmakers in Washington have threatened to overhaul Section 230, the law that shields websites from lawsuits over user content, with almost nothing to show for it. 

A House hearing Wednesday may reveal whether any of the proposals on the table have enough traction to become law — or whether Congress needs to go back to the drawing board. 

The session, hosted by the House Energy and Commerce Committee’s communications and technology panel, is expected to focus on a raft of bills introduced by members of the committee, including ones targeting algorithmic amplification, civil rights abuses and digital ads.

Here’s our breakdown of the bills under consideration and what their chances are of actually going anywhere:

Platforms’ recommendation algorithms are increasingly under fire

The push to open tech platforms up to liability for amplifying certain types of harmful content has gained the most momentum of any Section 230 effort this year, including by drawing a public endorsement from Facebook whistleblower Frances Haugen.

Most significantly, top Democrats on the key Energy and Commerce Committee, which holds primary jurisdiction over Section 230, are leading a bill to open platforms up to lawsuits for making personalized recommendations to users that lead to their harm. The legislation is similar to another proposal, led by Reps. Anna Eshoo (D-Calif.) and Tom Malinowski (D-N.J.), which zeros in on recommended content tied to civil rights abuses or international terrorism. 

While neither of the bills currently has backing from Republicans, the concepts behind them have bipartisan support. A discussion draft bill unveiled in July by two top House Republicans would open large platforms up to liability when they use algorithms to “amplify, promote or suggest” content to users without their knowledge or consent. Another bill from Sen. Marco Rubio (R-Fla.) would also hold companies accountable for algorithmically amplifying content.

For lawmakers looking to advance their Section 230 bills, gaining bipartisan, bicameral support will be crucial, particularly given the higher threshold for passing bills in the Senate. But both are tall tasks.

For the bills targeting how algorithms amplify content, a major hurdle will be whether Republicans and Democrats can agree on whether that carve-out should apply to all content or a smaller subset, such as posts that lead to harm. And they’ll need to find champions to lead the charge in the Senate, where some of the bills don’t yet have counterparts.

More content-specific carve-outs could be on the way

While there’s a growing focus on how content is amplified, plenty of lawmakers are still pushing to open platforms up to more liability for hosting certain types of noxious or harmful content. That playbook mirrors FOSTA-SESTA, the law passed in 2018 that gave state officials and victims more power to sue websites for hosting sex trafficking content on their platforms. 

Democrats have increasingly zeroed in on how digital platforms can enable civil rights abuses, including through targeted online advertising. That’s the focus of one bill led by Rep. Yvette Clarke (D-N.Y.), the Civil Rights Modernization Act, H.R. 3184, that’s likely to get consideration at this week’s House hearing. 

Sens. Mark Warner (D-Va.), Mazie Hirono (D-Hawaii) and Amy Klobuchar (D-Minn.) introduced a separate bill earlier this year, the SAFE TECH Act, that in part seeks to open companies up to greater liability over civil rights claims. It has gained a companion measure in the House. But those lawmakers face a major obstacle in trying to win over Republicans, who have balked at other Democratic civil rights initiatives.

Instead, there appears to be broader support for carve-outs tied to content that’s universally decried, such as terrorism, illegal drugs, child exploitation and cyberbullying. 

Bills targeting those types of content have been introduced by Republicans and Democrats alike, including the Democratic-led SAFE TECH Act, the bipartisan EARN IT Act and the bipartisan See Something, Say Something Online Act. The EARN IT Act, for one, sailed unanimously out of committee last year in the Senate and has since picked up a House counterpart, making it one of the top contenders to become law.

House Energy and Commerce Republicans also made illegal drug sales, cyberbullying, terrorism and child exploitation a focus in a series of discussion draft bills to revamp Section 230 that they unveiled in July. 

Misinformation, bias and political speech bills are essentially DOA

Don’t expect bills that explicitly target online misinformation or allegations of censorship to get anywhere near the president’s desk anytime soon. 

While Democrats and Republicans have found some agreement about holding platforms more accountable for hosting noxious content, like child exploitative material, they remain diametrically opposed on issues around political speech, misinformation and alleged bias. 

And with the filibuster in place in the Senate for the foreseeable future, meaning most legislation requires bipartisan support to pass, neither side is likely to be able to pass any legislation on those fronts.

Our top tabs

Instagram head Adam Mosseri has agreed to testify before Congress

Mosseri is expected to testify next week before a Senate panel examining children’s online safety, the New York Times’s Ryan Mac and Cecilia Kang report. It will be Mosseri’s first time testifying under oath on Capitol Hill.

The hearing comes around two months after Facebook whistleblower Frances Haugen testified before a Senate Commerce Committee panel. Haugen has disclosed internal Facebook documents detailing the harms of the company’s platforms, including Instagram.

Sen. Richard Blumenthal (D-Conn.), who leads the committee’s consumer protection subcommittee, plans to question Mosseri on how Instagram’s algorithms can lead children to go down rabbit holes with harmful content, Mac and Kang report. He also plans to press Mosseri to commit to making Instagram’s ranking and recommendation algorithms transparent, they report.

“We continue to work with the committee to find a date for Adam to testify on the important steps Instagram is taking,” Dani Lever, a spokeswoman for Instagram parent Meta, told the Times.

Europe’s top antitrust enforcer wants lawmakers to quickly agree to legislation targeting Big Tech

European Commissioner for Competition Margrethe Vestager is pressing the European Parliament and European Council to back the rules even if they’re not perfect, the Financial Times’s Javier Espinoza reports.

Vestager’s encouragement “comes after almost a year of discussions among EU regulators and legislators, who have struggled to agree on the fine print of the Digital Markets Act (DMA) and the Digital Services Act (DSA),” Espinoza writes. Those legislative packages aim to target gatekeepers and improve online safety. A committee in the European Parliament approved a draft of the DMA last week.

“It’s important that everyone realizes that it is best to get 80 percent now than 100 percent never,” Vestager said. “This is another way of saying that perfect should not be the enemy of very, very good.”

Australia’s prime minister proposed legislation to force social media companies to reveal trolls’ identities

Under the proposal, social media companies would be on the hook for defamation claims if they don’t hand over the trolls’ identities, the Verge’s Emma Roth writes.

ABC News Australia’s Tom Lowrey has more details about the proposal by Australian Prime Minister Scott Morrison:

  • Social media companies would have to set up complaints systems for users to ask other users to take down potentially defamatory content.
  • If the user doesn’t take down a reported post that is potentially defamatory — or if the complainant wants to sue — the social media company asks the poster for consent to share their personal information with the complainant.
  • If they don’t consent, a court could order the social media company to release the information.

A draft of the proposal is expected to be released this week, Lowrey reports. The legislation will probably be introduced in Australia’s parliament early next year. 

Rant and rave

Instagram head Adam Mosseri confirmed in a video that he is planning to discuss children's safety issues with Congress soon. The New York Times's Cecilia Kang:

It looks like Mosseri will testify before Congress before YouTube CEO Susan Wojcicki. Harvard Law School lecturer Evelyn Douek:

Others focused on the discourse about teenage girls and eating disorders. Gizmodo's Shoshana Wodinsky:


Daybook

  • The Senate Commerce Committee holds a nomination hearing for Gigi Sohn, President Biden’s pick to be a Federal Communications Commissioner, and Alan Davidson, Biden’s pick to be assistant secretary for communications and information at the National Telecommunications and Information Administration, on Wednesday at 10:15 a.m.
  • The House Energy and Commerce Committee’s communications and technology subcommittee holds a hearing on proposals taking aim at Big Tech’s legal immunity on Wednesday at 10:30 a.m.
  • The House Science Committee holds a hearing on microelectronics on Thursday at 10 a.m.

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.