Columbia University professor Alondra Nelson tweeted, “We must stop confusing ‘inclusion’ in more ‘diverse’ surveillance systems with justice and equality.”
Today’s facial-recognition systems more often misidentify people of color because of a long-running data problem: The massive sets of facial images they train on skew heavily toward white men. A Massachusetts Institute of Technology study this year of the face-recognition systems designed by Microsoft, IBM and the China-based Face++ found their accuracy in classifying a person’s gender was 99 percent for light-skinned males and 70 percent for dark-skinned females.
In a project that debuted Thursday, Joy Buolamwini, an artificial-intelligence researcher at the MIT Media Lab, showed facial-recognition systems consistently giving the wrong gender for famous women of color, including Oprah, Serena Williams, Michelle Obama and Shirley Chisholm, the first black female member of Congress. “Can machines ever see our grandmothers as we knew them?” Buolamwini asked.
The companies have responded in recent months by pouring many more photos into the mix, hoping to train the systems to better tell the differences among more than just white faces. IBM said Wednesday it used 1 million facial images, taken from the photo-sharing site Flickr, to build the “world’s largest facial dataset,” which it will release publicly for other companies to use.
Both IBM and Microsoft said the larger datasets allowed their systems to recognize gender and skin tone with much more precision. Microsoft said its improved system had reduced the error rates for darker-skinned men and women by “up to 20 times,” and reduced error rates for all women by nine times. The company did not define a baseline for that reduction or give an estimate of accuracy, which can vary widely depending on factors such as image quality.
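Because Microsoft's claim is relative, the same "20 times" figure is compatible with very different absolute accuracies. A brief illustration of the arithmetic, using an assumed baseline (Microsoft reported none; the 20.8 percent starting error below is hypothetical, not the company's figure):

```python
# Hypothetical arithmetic only: the baseline error rate is assumed,
# not reported by Microsoft. The point is that a relative "20x"
# reduction says nothing absolute without a stated starting point.
baseline_error = 0.208    # assumed: 20.8% error on darker-skinned women
reduction_factor = 20     # Microsoft's claimed "up to 20 times"

new_error = baseline_error / reduction_factor   # 0.0104 -> 1.04% error
new_accuracy = 1 - new_error                    # implied 98.96% accuracy

print(f"implied new error rate: {new_error:.2%}")
print(f"implied accuracy: {new_accuracy:.2%}")
```

Had the baseline instead been 2 percent, the same "20x" claim would imply a 0.1 percent error rate, which is why the undefined baseline matters.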
Those improvements were heralded by some for taking aim at the prejudices in a rapidly spreading technology, including potentially reducing the kinds of false positives that could lead police officers to misidentify a criminal suspect. Clare Garvie, an associate at Georgetown Law’s Center on Privacy and Technology, said, “Any effort by companies to make their systems more equitable and accurate across demographics can only be a good thing.”
But others suggested the technology’s increasing accuracy could also make it more marketable. The systems should be accurate, “but that’s just the beginning, not the end, of their ethical obligation,” said David Robinson, managing director of the think tank Upturn, which co-signed a letter in April calling face recognition “categorically unethical to deploy.”
Face recognition’s promise of a simple, long-range identification system has made it a compelling tool for criminal justice, private security and mass surveillance. But for the companies racing to develop and sell it, the technology is a double-edged sword: efforts to refine its capabilities can themselves be seen as potentially dangerous or morally fraught.
At the center of that debate is Microsoft, whose multimillion-dollar contracts with U.S. Immigration and Customs Enforcement (ICE) came under fire amid the agency’s separations of migrant parents and children at the Mexican border.
Face recognition is a core feature of Azure Government, the cloud-computing system Microsoft has promoted to ICE and other agencies as a way to efficiently process lots of data and tap artificial-intelligence applications such as image analysis and real-time translation.
In an open letter to Microsoft chief executive Satya Nadella urging the company to cancel that contract, Microsoft workers pointed to a company blog post in January that said Azure Government would help ICE “accelerate facial recognition and identification.” “We believe that Microsoft must take an ethical stand, and put children and families above profits,” the letter said.
A Microsoft spokesperson, pointing to a statement last week from Nadella, said the company’s “current cloud engagement” with ICE supports relatively anodyne office work such as “mail, calendar, messaging and document management workloads.” The company said in a statement that its facial-recognition improvements are “part of our ongoing work to address the industry-wide and societal issues on bias.”
ICE has voiced interest in expanding deployment of artificial-intelligence features such as face recognition and behavior-prediction algorithms to crack down on immigration offenses and pursue the Trump administration’s stated goal of “extreme vetting” of foreign visitors. Federal agents and police officers in airports and along the borders currently use face recognition to identify people or match potential fugitives.
Criticism of face recognition will likely expand as the technology finds its way into more arenas, including airports, stores and schools. The Orlando Police Department said this week it would not renew its use of Amazon’s Rekognition system following criticism of the technology’s effects on privacy and civil rights.
Companies “have to acknowledge their moral involvement in the downstream use of their technology,” Robinson said. “The impulse is that they’re going to put a product out there and wash their hands of the consequences. That’s unacceptable.”