With that in mind, two researchers from Stanford University decided to study how well artificial intelligence could identify people’s sexual orientation based on their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features. They then showed the software randomly selected face pictures and asked it to guess whether the people in them were gay or heterosexual.
The results were unsettling. According to the study, first published last week, the algorithm was able to correctly distinguish between gay and heterosexual men 81 percent of the time, and gay and heterosexual women 71 percent of the time, far outperforming human judges. Given the prevalence of such technology, the researchers wrote, “our findings expose a threat to the privacy and safety of gay men and women.”
Now, however, two prominent LGBT advocacy groups, GLAAD and the Human Rights Campaign (HRC), are denouncing the study as “junk science.” Far from protecting the LGBT community, they say, it could be used as a weapon against gay and lesbian people, as well as against heterosexuals who could be inaccurately “outed” as gay. The researchers, in turn, have issued multiple lengthy defenses of their work and said they are the victims of a “smear campaign.”
“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” said HRC’s Ashland Johnson, director of public education and research. “Stanford should distance itself from such junk science.”
The groups pointed to several limitations in the study that they said undermined its conclusions. For example, they said, the researchers didn’t look at nonwhite people, didn’t independently verify information such as age and sexual orientation, and examined “superficial characteristics” such as weight, hairstyle and facial expression. The groups also said they brought up their concerns with the researchers to no avail.
“Technology cannot identify someone’s sexual orientation,” GLAAD Chief Digital Officer Jim Halloran said. “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community.”
The researchers, Michal Kosinski and Yilun Wang, released a pair of detailed written responses Sunday and Monday calling the groups’ reaction “knee-jerk.”
In short, they said, GLAAD and HRC didn’t seem to have read their work in full and didn’t attempt to understand the science behind it.
“It really saddens us that the LGBTQ rights groups, HRC and GLAAD, who strived for so many years to protect the rights of the oppressed, are now engaged in a smear campaign against us with a real gusto,” read a statement from the researchers. The groups’ news release was “full of counterfactual statements,” they said.
The study, which was peer reviewed and accepted for publication in the Journal of Personality and Social Psychology, found that an algorithm could differentiate between gay and heterosexual men and women most of the time using a single photograph. When the algorithm was given five images of the same person, the accuracy increased to 91 percent for men and 83 percent for women, according to the results. Human judges, on the other hand, could only get it right 61 percent of the time for men and 54 percent of the time for women — not much better than random guessing.
The researchers found that facial morphology, expressions and grooming styles were all reliable predictors for whether a person was gay or straight. They said certain gender-atypical features — including narrower jaws among gay men and larger jaws among lesbians — may be linked to different levels of hormone exposure in the womb.
Many of GLAAD and HRC’s concerns about the findings were addressed in the study itself, which discussed the limitations of the research, according to Kosinski and Wang. Only white men and women were used in the study, they noted, because they couldn’t find sufficient numbers of nonwhite subjects. They added that they tried to verify personal details such as age and sexual orientation, and dismissed criticisms that their pool of subjects was too narrow. Such shortcomings didn’t invalidate the findings, they said.
Kosinski and Wang also argued that their study had important social value. They said they had been concerned about publishing their results, given the risks to privacy, but decided to do so anyway to raise awareness of the dangers posed by the misuse of such technology.
“We did not build a privacy-invading tool,” they wrote in a summary of their findings Sunday. “We studied existing technologies, already widely used by companies and governments, to see whether they present a risk to the privacy of LGBTQ individuals. We were terrified to find that they do.”
“Let’s be clear: our paper can be wrong,” the researchers added. “In fact, despite evidence to the contrary, we hope that it is wrong. But only replication and science can debunk it — not spin doctors.”
The backlash against the study wasn’t limited to GLAAD and HRC. Kosinski wrote that he received emails telling him to kill himself and comparing his work to the Holocaust. Others on social media were no more charitable.
Alex Bollinger, a writer at LGBTQ Nation, came to the researchers’ defense. In a post Sunday, he wrote that while the study was not a “complete picture of what LGBTQ people look like,” there was no reason to reject it outright.
“I honestly don’t know why HRC and GLAAD have such a problem with this paper. Part of it is probably the science illiteracy evident in their statement, as well as a lack of familiarity with how research works,” Bollinger wrote. “This is just one study that looked at one sample and said a few things. There will be more studies later on that will say other things. Let’s see how that all unfolds before deciding what the correct answer is.”