Parents, professors and plenty of politicians disapprove of the content that YouTube serves up to its billions of users every day. Who else disapproves, according to a study published in July? YouTube itself.
Because 70 percent of all video views on YouTube come from algorithmic recommendations, these things matter. Because an estimated 700 million hours of video are watched on YouTube every day, they matter a lot.
YouTube, unsurprisingly, takes issue with these findings. A second study released in August looked at viewership trends and found no evidence that recommendations were driving users to ever more radical content. Instead, people seemed mostly to have found their way to far-right videos from far-right websites they already frequented. And the term “regrettable,” on which the study relies, is fuzzy: One person’s regret is another’s niche interest, and while the researchers point to misinformation, racism and a sexualized “Toy Story” parody, YouTube itself notes that the flagged material also included videos as innocuous as DIY crafts and pottery-making tutorials. YouTube also argues that the paper’s determinations about rule violations are based only on the researchers’ interpretation of its rules, rather than the company’s.
So who’s right? The inability to answer that question is at the core of the problem. YouTube boasts that its efforts to reduce the recommendation of “borderline content” have resulted in a 70 percent decrease in watch-time of those videos that skirt the terms of service — an implicit acknowledgment that the engagement incentives of the recommender algorithm clash with the safety incentives of the content moderation algorithm that seeks to stamp out harmful material before users see it. What exactly borderline content is, however, remains unclear to the general public, as well as to those researchers who decided, to the platform’s consternation, to guess. The lack of transparency surrounding what the algorithm recommends, to whom it recommends it and why also means that studies like this one are among the few ways even to attempt to understand the workings of a powerful tool of influence.
Lawmakers are already considering regulations that would prompt platforms to open up the black boxes of their algorithms to outside scrutiny — or at least to provide aggregated data sets about the outcomes those algorithms produce. These latest studies, however, drive home a critical truth: Users deserve to understand better how platforms curate their personal libraries of information, and they deserve more control to curate for themselves.