
Tech’s sexism doesn’t stay in Silicon Valley. It’s in the products you use.


It was a rough weekend at Google.

On Friday, a 10-page memo titled “Google’s Ideological Echo Chamber” started circulating on the company’s internal networks, arguing that the disparities between men and women in tech and leadership roles were rooted in biology, not bias. By Saturday afternoon, the tech news site Gizmodo had obtained and published the entire thing. The story blew up.

The author, a male software engineer, argued that women were more neurotic and less stress-tolerant than men; that they were less likely to pursue status than men; that they were less interested in the “systematizing” work of programming. “We need to stop assuming that gender gaps imply sexism,” he concluded before offering recommendations. Those included demanding that Google “de-emphasize empathy,” that it stop training people on micro-aggressions and sensitivity, and that it cancel any program aimed specifically at advancing or recruiting women or people of color.

The memo was reductive, hurtful and laced with assumption. It was also unsurprising.


We’ve heard lots about Silicon Valley’s toxic culture this summer — its harassing venture capitalists, its man-child CEOs, its abusive nondisparagement agreements. Those stories have focused on how that culture harms those in the industry — the women and people of color who’ve been patronized, passed over, pushed out and, in this latest case, told they’re biologically less capable of doing the work in the first place. But what happens in Silicon Valley doesn’t stay in Silicon Valley. It comes into our homes and onto our screens, affecting all of us who use technology, not just those who make it.

Take Apple Health, which promised to monitor “your whole health picture” when it launched in 2014. The app could track your exercise habits, your blood alcohol content and even your chromium intake. But for a full year after launch, it couldn’t track one of the most common human health concerns: menstruation.

And consider smartphone assistants such as Cortana and Siri. In 2016, researchers writing in JAMA Internal Medicine noted that these services couldn’t understand phrases such as “I was raped” or “I was beaten up by my husband” — and, even worse, would often respond to queries they didn’t understand with jokes.

Then there’s Snapchat. Last year, on April 20 (otherwise known as “4/20,” a holiday of sorts for marijuana fans), the app launched a new photo filter: “Bob Marley,” which applied dreadlocks and darkened skin tones to users’ selfies. The filter was roundly criticized as “digital blackface,” but Snapchat refused to apologize. In fact, just a few months later, it launched another racially offensive filter — this one morphing people’s faces into Asian caricatures replete with buckteeth, squinty eyes and red cheeks.


It’s bad enough for apps to showcase sexist or racially tone-deaf jokes or biases. But in many cases, those same biases are also embedded somewhere much more sinister — in the powerful (yet invisible) algorithms behind much of today’s software.

For a simple example, look at FaceApp, which came under fire this spring for its “hotness” photo filter. The filter smoothed wrinkles, slimmed cheeks — and dramatically whitened skin. The company behind the app acknowledged that the filter’s algorithm had been trained using a biased data set — meaning the algorithm had learned what beauty was from faces that were predominantly white.

Likewise, in 2015, Google launched a new image-recognition feature for its Photos app. The feature would trawl through users’ photos, identify their contents, and automatically add labels to them — such as “dog,” “graduation” or “bicycle.” But Brooklyn resident Jacky Alciné noticed a more upsetting tag: A whole series of photos of him and a friend, both black, was labeled with the word “Gorillas.” The racial slur wasn’t intentional, of course. It was simply that the system wasn’t as good at identifying black people as it was white people. After the incident, Google engineers acknowledged this, calling for product improvements focused on “better recognition of dark-skinned faces.”
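What does catching that kind of failure look like in practice? Here is a minimal sketch, in Python, of the sort of audit engineers can run: break accuracy out by demographic group instead of reporting a single average. The evaluation set, group labels and model callable below are hypothetical stand-ins, not Google's actual pipeline.

```python
from collections import defaultdict

def accuracy_by_group(examples, model):
    """Report labeling accuracy separately for each demographic group.

    `examples` yields (image, true_label, group) tuples and `model` is
    any callable returning a predicted label -- both hypothetical.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, true_label, group in examples:
        total[group] += 1
        if model(image) == true_label:
            correct[group] += 1
    # A single overall score hides gaps; per-group scores expose them.
    return {group: correct[group] / total[group] for group in total}
```

A system that scores well overall can still fail badly on the groups its training data under-represented; only a disaggregated report makes that visible.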

Then there’s Word2vec, a neural network Google researchers created in 2013 to assist with natural language processing — that is, computers’ ability to understand human speech. The researchers built Word2vec by training a program to comb through Google News articles and learn about the relationships between words. Millions of words later, the program can complete analogies such as “Paris is to France as Tokyo is to _____.” But Word2vec also returns other kinds of relationships, such as “Man is to woman as computer programmer is to homemaker,” or “Man is to architect as woman is to interior designer.”
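Those analogies are just vector arithmetic, and they are easy to reproduce. A rough sketch using the open-source gensim library and the pretrained Google News vectors; the file name and the phrase token "computer_programmer" are the commonly distributed ones, assumed here rather than guaranteed.

```python
# Sketch: probing word2vec analogies with gensim.
# Assumes the standard pretrained Google News vectors have been downloaded.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# An analogy is vector arithmetic: France - Paris + Tokyo lands near Japan.
print(vectors.most_similar(positive=["France", "Tokyo"],
                           negative=["Paris"], topn=1))

# The same arithmetic surfaces a learned stereotype:
# computer_programmer - man + woman lands near homemaker.
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=1))
```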


These pairings aren’t surprising — they simply reflect the Google News data set the network was built on. But in an industry where white men are the norm and “disruption” trumps all else, technology such as Word2vec is often assumed to be objective and then embedded into all sorts of other software, whether it’s recommendation engines or job-search systems. Kathryn Hume, of artificial-intelligence company Integrate.ai, calls this the “time warp” of AI: “Capturing trends in human behavior from our near past and projecting them into our near future.” The effects are far-reaching. Study after study has shown that biased machine-learning systems result in everything from job-search ads that show women lower-paying positions than men to predictive-policing software that perpetuates disparities in communities of color.

Some of these flaws might seem small. But together, they paint a picture of an industry that’s out of touch with the people who actually use its products. And without a fundamental overhaul to the way Silicon Valley works — to who gets funded, who gets hired, who gets promoted and who is believed when abuses happen — it’s going to stay that way. That’s why calls to get rid of programs targeted at attracting and supporting diverse tech workers are so misguided. The sooner we stop letting tech get away with being insular, inequitable and hostile to diversity, the sooner we’ll start building technology that works for all of us.
