The Washington Post | Democracy Dies in Darkness

Google Maps’ White House glitch, Flickr auto-tag, and the case of the racist algorithm

The Google Maps view of the White House was briefly labeled with a racist slur. (EPA/Olivier Douliery; Google)

When Flickr rolled out image recognition two weeks ago, it touted the tool as a major breakthrough in the world of online photos. There was just one itty-bitty problem: It sometimes tagged black people as “apes” or “animals.” And it slapped the label “jungle gym” on a picture of the Dachau concentration camp.

These aren’t human errors: They are, in essence, made by a machine. And if you look around the Internet, you’ll notice these algorithmic offenses happen … pretty frequently.

In 2013, research from Harvard found that Google ads suggesting arrest records appeared more frequently when users searched for black-identifying names.

Last year, the think tank Robinson + Yu warned that financial algorithms used in the mortgage industry frequently treated white and minority homebuyers differently. (It’s a criticism that’s also been made of Chicago’s predictive crime technology.)

In Britain, a female pediatrician made international news when she was barred from entering a women’s locker room because her gym’s security system automatically coded all “doctors” as male.

And just Tuesday, my colleague Brian Fung uncovered a pretty appalling error on Google Maps: Someone vandalized the location listing for the White House, adding a racial expletive. No one — human or machine! — flagged it during Google’s ensuing review process. (That edit was, to be clear, made by a person, but Google uses human moderators and automated systems to review these kinds of changes.)

[If you search Google Maps for the N-word, it gives you the White House]

What’s going on here, exactly? How does a system of equations — unfeeling, inert math — adopt such human biases? After all, no one at Google or Flickr intentionally programs their algorithms to be racist.

They do, however, program these systems to learn from human behavior and to adapt to it. And on the whole, unfortunately: People are racist.

Take the case of eEffective, a digital ad firm. Last year, the company’s managing director, Nate Carter, was disturbed to see that his algorithm, given the choice between an ad featuring a white child and one featuring a black child, kept serving the white child. He hadn’t planned it that way: All the algorithm was supposed to do was track which ad people clicked, and serve that ad up more.

But given the choice, people clicked the ad with the white child. So the algorithm, which is itself color-blind, kept displaying it.

“[It] made me wonder, are we racist?” Carter wrote, in a later essay. “Had our racism poisoned my algorithm and turned it into a monster?”
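That feedback loop is easy to reproduce. Here is a minimal, purely hypothetical sketch (none of the names or numbers come from eEffective’s actual system) of a greedy ad selector that always serves whichever ad has the highest observed click-through rate. Even a small difference in how often the audience clicks each ad is enough to make the selector lock onto one of them almost exclusively.

```python
import random

def pick_ad(stats):
    """Greedy selection: serve the ad with the highest observed
    click-through rate so far, breaking ties at random.
    (Hypothetical sketch -- not eEffective's actual algorithm.)"""
    def ctr(ad):
        shown, clicked = stats[ad]
        return clicked / shown if shown else 0.0
    best = max(ctr(ad) for ad in stats)
    return random.choice([ad for ad in stats if ctr(ad) == best])

def simulate(click_prob, rounds=10_000, seed=0):
    """Feed the selector simulated audience clicks and count how
    often each ad ends up being served."""
    random.seed(seed)
    stats = {ad: [0, 0] for ad in click_prob}   # ad -> [shown, clicked]
    served = {ad: 0 for ad in click_prob}
    for _ in range(rounds):
        ad = pick_ad(stats)
        served[ad] += 1
        stats[ad][0] += 1                       # one more impression
        if random.random() < click_prob[ad]:
            stats[ad][1] += 1                   # simulated audience click
    return served

# Hypothetical audience that clicks one ad only slightly more often:
# whichever ad gets the first click is served almost exclusively
# from then on, because the other ad's observed rate stays at zero.
served = simulate({"ad_A": 0.11, "ad_B": 0.10})
print(served)
```

The point of the sketch is that nothing in the code mentions race: the selector simply amplifies whatever preference shows up in the click data, which is exactly the dynamic Carter describes.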

It’s a valid question, and one that both technologists and sociologists are still working out — particularly when it comes to larger, more complex algorithmic systems, whose biases and consequences are harder to suss out. A new field of study, called algorithmic auditing, attempts to probe these systems and determine where bias is introduced. Meanwhile, Flickr has promised that it’s “working on a fix,” and Google has suspended map editing until the company can better moderate it.

Whether those fixes succeed or fail, the episode raises some fascinating existential questions about technology. Like: Do we really want machines to “learn” from us? Maybe not, honestly.
