
Only rarely has Facebook, the media monolith that commands the eyeballs of some 1.65 billion people, faced the level and volume of scrutiny that it’s facing right now.

Following a pair of Gizmodo reports that raised questions about Facebook employees’ control over the site’s influential “trending” news stories, everyone from pundit S.E. Cupp to the chairman of the Senate Commerce Committee has demanded that Facebook pull back the curtain and reveal who is surfacing these things.

The scrutiny is well-deserved, given Facebook’s profound influence over what and how the world reads. But interviews with former Facebook curators and a survey of Facebook’s trending stories suggest the site’s critics are misdirecting their concerns.

While we’re digging into the black box that is Facebook’s editorial operation, we ought to spend a lot more time on its algorithms.

“These algorithms, these AIs, are influencing information, decisions, and pretty much everything in 21st-century society,” Jonathan Koren, a former engineer on Facebook’s trending team, wrote Thursday on LinkedIn. They’re the driving force behind Facebook’s personalized trending module, he added. And they, like all algorithms, suffer major flaws and biases, often in ways so complex that their makers can’t untangle them.

By comparison, the interplay between Facebook’s trending algorithm and its human minders is relatively simple: According to editorial guidelines released by Facebook and confirmed by two former employees, the role of human curators is largely confined to basic production and copy-editing.

The algorithm, on the other hand — a catch-all phrase for a system of automated processes that make complex mathematical predictions, according to data we usually don’t know we’ve given out — identifies trending news stories, ranks and feeds them to curators, and chooses which users see which stories when they’re redistributed. In phase one of the process, broadly speaking, a program monitors Facebook for upticks in shares and comments around specific topics, while an external RSS crawler simultaneously searches predesignated feeds for soon-to-trend breaking news stories that Facebook’s users haven’t begun sharing yet.
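Facebook hasn’t published that detection code, but the process it describes is, at bottom, a spike detector. Here is a minimal sketch of the idea in Python; the thresholds, data shapes and function names are hypothetical, not Facebook’s.

```python
from collections import Counter

def detect_spiking_topics(recent_posts, baseline_counts, spike_ratio=3.0, min_mentions=500):
    """Flag topics whose share/comment volume in the last hour far exceeds
    their historical baseline. All numbers here are illustrative guesses."""
    recent_counts = Counter()
    for post in recent_posts:                      # posts shared or commented on recently
        for topic in post["topics"]:               # topics extracted from the post text
            recent_counts[topic] += 1

    spiking = []
    for topic, count in recent_counts.items():
        baseline = baseline_counts.get(topic, 1)   # typical hourly volume for this topic
        if count >= min_mentions and count / baseline >= spike_ratio:
            spiking.append((topic, count / baseline))

    # Rank the candidates that will land in the curators' moderation queue
    return sorted(spiking, key=lambda pair: pair[1], reverse=True)
```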

In phase two, those topics appear in a ranked moderation queue, where curators verify that the news story is legit by confirming that three media websites on a list of 1,000 have reported it, and attach a headline, photo and metadata like place names and related subjects.
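That verification step is a simple counting rule. A rough illustration, with a stand-in site list and placeholder matching logic rather than Facebook’s actual tooling:

```python
from urllib.parse import urlparse

TRUSTED_SITES = {"nytimes.com", "bbc.com", "reuters.com"}  # stand-in for the 1,000-site list

def is_confirmed(article_urls, trusted_sites=TRUSTED_SITES, required=3):
    """A candidate topic clears the moderation queue only if stories about it
    have appeared on at least `required` distinct sites from the trusted list."""
    outlets = {urlparse(url).netloc.removeprefix("www.") for url in article_urls}
    return len(outlets & trusted_sites) >= required
```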

From there, another automated program pushes approved trends out into a prominent homepage module on desktop, and into less prominent positions in Facebook’s mobile site and apps. Of the roughly 60 to 100 trends in circulation at any given time, individual users see fewer than 10 — their selection personalized, algorithmically, according to what Facebook vaguely terms “a number of factors.”
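Facebook describes that last step only as personalization by “a number of factors.” Conceptually, though, it is a per-user ranking problem: score every active trend against what the site knows about a person, then keep the top handful. A toy version, with features and weights invented for illustration, might look like this:

```python
def personalize_trends(active_trends, user, max_shown=8):
    """Rank the site-wide pool of trends for one user and return the few
    they will actually see. Features and weights are invented, not Facebook's."""
    def score(trend):
        s = trend["global_popularity"]                            # how widely it's trending overall
        s += 2.0 * len(trend["topics"] & user["liked_topics"])    # overlap with the user's stated interests
        s += 1.5 if trend["region"] == user["region"] else 0.0    # local relevance
        s += 1.0 if trend.get("major_story") else 0.0             # curator-applied importance tag
        return s

    return sorted(active_trends, key=score, reverse=True)[:max_shown]
```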

The site has traditionally been tight-lipped about how widely each topic trends, and whether some stories are prioritized over others. In light of the Gizmodo revelations, however, it’s become evident that this distribution is almost always automated according to site popularity — except in rare instances when curators attach an additional tag that marks a trend as an important national or international news story. Those tags could only be applied, former curators said, after five mainstream, priority outlets from a list of 10 had also reported on them.
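That tagging rule, as the former curators describe it, is itself just a numerical threshold. A sketch, with placeholder outlet names standing in for the priority list:

```python
PRIORITY_OUTLETS = {"nytimes.com", "bbc.com", "cnn.com"}  # stand-in for the 10-outlet priority list

def qualifies_as_major_story(reporting_outlets, priority_outlets=PRIORITY_OUTLETS, required=5):
    """Curators may attach the national/international 'major story' tag only
    once at least `required` of the priority outlets have covered the topic."""
    return len(reporting_outlets & priority_outlets) >= required
```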

In that way, even the humans operate rather like algorithms: they can only execute actions when specific, numerical conditions are met first. In the past week, Facebook has trotted that observation out several times as evidence that its methods are beyond reproach: Decisions are made largely “by our algorithms, not people,” Facebook’s vice president of global operations wrote, and are thus presumed to be objective and responsible.

But that, as many in the field have pointed out, is not how algorithms work. They necessarily replicate human biases and errors, even if they weren’t intended to.

One former curator told the Post, for instance, that she had never seen evidence of political bias on Facebook’s trending team, but that she suspected some stories were accidentally suppressed because Facebook’s list of trusted outlets skewed leftward. Similarly, in a show of good faith, Facebook published the configuration file behind its RSS crawler and the list of 1,000 media sites curators use to check whether a story is true. But those lists are outdated and inconsistent: They include dozens of sites, like BettyCrocker.com, that do not report news, and exclude dozens more that do.

More concerning still is the mystery that surrounds the personalization of the Facebook trending module: Which news does Facebook highlight for which people, and how does the algorithm make those decisions? To probe this issue, we conducted a nonscientific, self-selected survey of 117 Facebook users in 106 U.S. ZIP codes, asking them to screenshot and submit their mobile trending module at three randomly chosen times over two days this week.

Consistent with Facebook’s guidelines to curators, we found that 80 to 100 individual trending topics were generally in play at any given time. A small handful of those topics — nonpartisan news and entertainment stories, often of the vaguely dopey kind you might expect on morning TV — are surfaced very widely. The remaining 80+ trending topics are surfaced only to specialized clusters of users.


Six of the 108 screenshots that were submitted to the Post between 5 and 7 p.m. on May 16. Note that “penis transplant” appears for almost everybody.

Our survey suggests, for instance, that Facebook may algorithmically surface different trending stories to men versus women, to mobile versus tablet users, and to residents of urban environments versus their counterparts in the suburbs — in addition to surfacing niche “trending” topics based on users’ stated interests.

News about the return of the Ford Ranger appeared in the trending topics of 12 participants, none of whom lived in major cities. (The truck is apparently a big deal in Akron, Ohio; Greensboro, N.C.; and Chattanooga, Tenn.) A participant who traveled from St. Paul to Chicago during the course of the two-day experiment saw her topics suddenly shift from national and regional stories to city-specific ones, like a Chicago food festival.

Stories about Amy Schumer and Beyoncé trended far more among women than men, as did the account of a euthanized baby bison in Yellowstone National Park. When the Post created two identical dummy accounts, differing only in their stated gender, Facebook showed the Yellowstone story to the female account and news about the gaming platform Steam and the new Grand Theft Auto game to the male one.

“Liking” a sports team page, meanwhile, is a one-way ticket to the ESPN filter bubble: Numerous fans in our survey were served a slate of relatively unpopular sports updates, to the exclusion of even major news stories that actually trended more widely across the site.

The only story that was surfaced to an overwhelming majority of users — suggesting, one former curator told us, that it had been manually given a “major news story” tag — was the news of the world’s first successful penis transplant. Every other story appeared to have been surfaced by algorithm.

“The root of [Facebook’s] bias is in algorithms,” the sociologist Zeynep Tufekci, an outspoken critic of Facebook, wrote in The New York Times on Thursday. “The first step forward is for Facebook, and anyone who uses algorithms in subjective decision making, to drop the pretense that they are neutral.”

This, alas, is not an issue we should expect Congress, or anyone else, to address any time soon. Unlike human bias, which is readily apparent and easily understood, algorithmic bias is complex, murky and typically inadvertent.

Thanks to the advent of machine learning and the sheer amount of data that Facebook possesses about its users, it’s very likely that even engineers at Facebook couldn’t explain precisely why the trending box displays different stories to men and women, or why minor news about the Kansas City Royals sometimes trends for a baseball-hating user in Boston. (“The things that the computer is taking into account when making a decision or prediction may actually be pretty fine-grained,” Koren said, “or, in the case of deep learning algorithms, not even be known or completely understood by the practitioners.”)

To me, at least, that feels slightly more ominous than the largely debunked allegation that some human curator didn’t rain down “importance” tags on right-wing news stories that weren’t also reported by the mainstream press. Certainly, it seems to bear more long-term significance: The curators will come and go, but we’re stuck with the algorithm.

Already, former curators told the Post, Facebook treats the members of its editorial team as if they were disposable: hiring and firing en masse, keeping them on short-term contracts, ordering them not to tell relatives about their jobs or to interact with Facebook’s full-time, in-house staff.

They believed, and they suspect Facebook also believes, that the algorithms will one day be totally self-contained and self-sufficient, leaving Facebook with no more need of their department. In fact, the original role of the trending team, Gizmodo has reported, was to train the algorithm to make even more editorial decisions.

How it will make those decisions, of course, none of us really know: We’ll just see the trends and presume they’re important because Facebook said so.

Abby Ohlheiser contributed reporting.
