with Aaron Schaffer

House lawmakers are probing advertising practices on YouTube Kids amid a growing focus on children's online privacy on Capitol Hill.

House Oversight Committee member Raja Krishnamoorthi (D-Ill.) said in a letter to YouTube CEO Susan Wojcicki that the company is not doing enough to protect children from marketing and excessive screen time on its child-focused service. He accused the company of offering children a “non-stop stream of low-quality, commercial content.”

“YouTube Kids serves an audience of children, but it appears to be serving up inappropriate, low-education, highly commercial content,” Krishnamoorthi, who chairs the Oversight subcommittee on economic and consumer policy, said in a news release. “YouTube profits from this disservice of children with more paid ads and more corporate revenue.”

The increased scrutiny comes after YouTube made significant changes to content for children last year as part of an effort to satisfy the Federal Trade Commission, which in 2019 fined the company tens of millions of dollars over alleged children's privacy violations.

“After the mandated changes, ads may be reaching children in other concerning manners,” Krishnamoorthi wrote. “It appears that a high volume of ‘made for kids’ videos are smuggling in hidden marketing and advertising with product placements by children’s influencers.”

Lawmakers are increasingly zeroing in on Silicon Valley’s efforts to build services for young children. 

Members of Congress from both parties have repeatedly raised concerns about a growing number of services that they say are aimed at hooking children on addictive tech products from a young age, potentially adding momentum to efforts to expand a decades-old privacy law that aims to protect children online. House lawmakers repeatedly drilled into the issue during last month's high-profile hearings with the CEOs of Facebook, Google and Twitter.

In Tuesday's letter, Krishnamoorthi sent YouTube a detailed list of demands for documents and information that he's calling on the company to turn over to the committee by April 20. He wants copies of the top 100 paid ads on YouTube Kids, along with the identity of the advertiser and the revenue generated by each ad. He also called on the company to disclose how many videos it has had to take down from the service because they were found to be inappropriate for children, as well as documents explaining how the company designed the children's service.

The scrutiny of YouTube Kids follows Facebook CEO Mark Zuckerberg's admission at last month's hearing that the company is building an Instagram for kids, which was first reported by BuzzFeed News. A group of Democratic lawmakers sent a letter to Facebook earlier this week demanding more details about the effort to build that service.

YouTube maintains that it has made significant investments to protect children’s privacy. 

“Over the last few years, we’ve worked hard to provide kids and families with protections and controls that enable them to view age-appropriate content,” YouTube spokeswoman Ivy Choi said in a statement. “We’ve made significant investments in the YouTube Kids app to make it safer and to serve more educational and enriching content for kids, based on principles developed with experts and parents.”

Choi said that the company does not serve personalized ads alongside content that is “made for kids,” and it has taken steps to ensure the content is age appropriate. 

Our top tabs

YouTube released data to show it's removing more hate speech, but researchers say it still has room to improve. 

The fairly low rate of views of videos that broke company policies in the fourth quarter, between 0.16 percent and 0.18 percent, still potentially amounts to millions of views, Gerrit De Vynck reports. The new numbers, which rely on a sample of purportedly representative data from the platform, came as the company, a major revenue driver for Google, faced criticism over misinformation about issues such as the coronavirus and baseless claims that the 2020 election was rigged.

“My top priority, YouTube’s top priority, is living up to our responsibility as a global platform,” said Neal Mohan, YouTube’s chief product officer. “And this is one of the most salient metrics in that bucket.” 

But critics say the company and its competitors aren’t doing enough to ban repeat offenders and work with each other to find rule-breaking content that pops up on multiple platforms.

Law enforcement officials across the country used facial recognition software, raising questions about privacy and surveillance.

More than 7,000 people from nearly 2,000 public agencies nationwide have used Clearview AI software to search for everyone from protesters to their own friends and family members, BuzzFeed News’s Ryan Mac, Caroline Haskins, Brianna Sacks and Logan McDonald report. That's raising the ire of privacy hawks in Congress. 

“Americans shouldn’t have to rely on BuzzFeed to learn their local law enforcement agencies were using flawed facial recognition technology,” Sen. Ron Wyden, an Oregon Democrat, told BuzzFeed. “This report pulls back the curtain on Clearview’s shady campaign to encourage the secret adoption of its service by local police. Inaccurate facial recognition will inevitably cause innocent people to be wrongly accused and convicted of crimes and could very well lead to tragedies.”

In many cases, departments said that they did not know that employees had used the software, which searches billions of images scraped from social media sites, between 2018 and 2020. Trial versions of the software were often pitched to police officials with little oversight.

Clearview AI co-founder and CEO Hoan Ton-That said it was “gratifying to see how quickly Clearview AI has been embraced by U.S. law enforcement,” though he declined to answer more than 50 questions about law enforcement use of the technology and the company’s practices.

A top Google AI research manager resigned amid criticism of the company’s treatment of his colleagues.

Samy Bengio managed hundreds of researchers and was considered an ally to ethical AI co-leads Timnit Gebru and Margaret Mitchell, who were recently ousted from the company amid controversy, Nico Grant, Josh Eidelson and Dina Bass report. Bengio announced his departure in an email to staff.

“I learned so much with all of you, in terms of machine learning research of course, but also on how difficult yet important it is to organize a large team of researchers so as to promote long term ambitious research, exploration, rigor, diversity and inclusion,” Bengio wrote, according to Reuters. His email said that he’s leaving to pursue “other exciting opportunities.”

Bengio’s email did not mention the departures of Gebru or Mitchell, according to Bloomberg News. Google declined to comment.

Rant and rave

Gebru described how sad she felt about Bengio's departure:

El Mahdi El Mhamdi, a scientist at Google Brain, said Bengio's resignation is a big loss for the company:

Google Ethical AI sociologist and senior research scientist Alex Hanna:

Pinboard's Maciej Cegłowski:


Mentions

  • President Biden plans to nominate Robin Carnahan, the former Missouri secretary of state who previously worked for the General Services Administration's digital office 18F, as the GSA's administrator.

Daybook

  • New America hosts a webinar on the digital divide in India, Pakistan and other countries today at 11:30 a.m.
  • Catherine Creese, the director of the U.S. Naval Seafloor Cable Protection Office, speaks at a Center for Strategic and International Studies event on underwater communication cables on Thursday at 1 p.m.
  • Google executive Karan Bhatia speaks at an event hosted by the Center for Strategic and International Studies on Friday at 10 a.m. 

Before you log off