
Opinion | Social media algorithms determine what we see. But we can’t see them.

The YouTube logo at the YouTube Space in Los Angeles in 2015. (Lucy Nicholson/Reuters)

Parents, professors and plenty of politicians disapprove of the content that YouTube serves up to its billions of users every day. Who else disapproves, according to a study published in July? YouTube itself.

A crowdsourced report by the Mozilla Foundation catalogues content on the platform that volunteer users considered “regrettable.” The takeaways: Seventy-one percent of the videos flagged were recommended to the users by the platform’s algorithmic system, and some of those videos violated YouTube’s own policies or came close to violating them. Sometimes, the troubling material wasn’t even related to the previous videos a user was watching — of special concern amid anecdotes of viewers following these recommendations all the way down the so-called YouTube rabbit hole of radicalization.

Because 70 percent of all video views on YouTube are algorithmically recommended, these things matter. Because an estimated 700 million hours of video are watched on YouTube every day, they matter a lot.

YouTube, unsurprisingly, takes issue with these findings. A second study released in August looked at viewership trends and found no evidence that recommendations were driving users to ever more radical content. Instead, people seemed mostly to have found their way to far-right videos from far-right websites they already frequented. And the term “regrettable,” on which the study relies, is fuzzy: One person’s regret is another’s niche interest, and while the researchers point to misinformation, racism and a sexualized “Toy Story” parody, YouTube itself notes that the flagged material also included videos as innocuous as DIY crafts and pottery-making tutorials. YouTube also argues that the paper’s determinations about rule violations are based only on the researchers’ interpretation of its rules, rather than the company’s.

So who’s right? The inability to answer that question is at the core of the problem. YouTube boasts that its efforts to reduce the recommendation of “borderline content” have resulted in a 70 percent decrease in watch-time of those videos that skirt the terms of service — an implicit acknowledgment that the engagement incentives of the recommender algorithm clash with the safety incentives of the content moderation algorithm that seeks to stamp out harmful material before users see it. What exactly borderline content is, however, remains unclear to the general public, as well as to those researchers who decided, to the platform’s consternation, to guess. The lack of transparency surrounding what the algorithm does recommend, to whom it recommends it and why also means that surveys like this report are one of the few ways even to attempt to understand the workings of a powerful tool of influence.

Lawmakers already are considering regulations to prompt platforms to open up the black boxes of their algorithms to outside scrutiny — or at least to provide aggregated data sets about the outcomes those algorithms produce. These latest studies, however, drive home a critical truth: Users themselves deserve to understand better how platforms curate their personal libraries of information, and they deserve more control to curate for themselves.
