Facial-recognition technology faced fierce resistance in Washington on Wednesday as both Democratic and Republican lawmakers criticized the artificial-intelligence software as a danger to Americans’ privacy and civil rights.
At a time when most issues in Washington generate a starkly partisan divide, members of the House Oversight and Reform Committee were startlingly bipartisan in their condemnation of the technology, which federal and local law-enforcement agencies already are using across the country to identify suspects caught on camera.
Members blasted the largely unregulated technology as inaccurate, invasive and having potentially chilling effects on Americans’ privacy and free expression rights. Several voiced support for passing federal laws to restrain the technology’s use before, as Rep. Mark Meadows (R-N.C.) said, “it gets out of control.”
Others voiced worries about the technology being used in the United States as it is in China, where it is critical to the government’s systems of public monitoring and social control.
Committee chairman Elijah E. Cummings (D-Md.) said “there’s a lot of agreement” among lawmakers that the technology should be regulated. The question, he said, is whether the systems should face a moratorium while the technology is assessed or refined, or whether it should be banned outright.
The committee’s ranking Republican, Rep. Jim Jordan (Ohio), compared the technology to Big Brother in the dystopian George Orwell novel “1984” and said it threatened Americans’ First and Fourth Amendment rights covering free speech and protections against unreasonable searches.
“Seems to me it’s time for a timeout,” he said. “Doesn’t matter what side of the political spectrum you’re on, this should concern us all.”
The technology’s higher rate of inaccuracies when scanning people of color — as shown in research led by Joy Buolamwini, an artificial intelligence researcher for the MIT Media Lab who testified at the hearing — also led some lawmakers to question more generally the lack of racial diversity in the American tech industry.
“We have a technology that was created and designed by one demographic, that is only mostly effective on that one demographic, and they’re trying to sell it and impose it on the entirety of the country,” Rep. Alexandria Ocasio-Cortez (D-N.Y.) said.
Daniel Castro, vice president at the Information Technology and Innovation Foundation, an industry-backed think tank, said in a statement Wednesday that the calls for bans or moratoriums on how police use the technology “are misguided and will only undercut efforts to make police agencies more efficient and effective in protecting local communities.”
The group has urged policymakers to “focus on a balanced approach” that would implement additional testing and oversight while still allowing police to use it while investigating crimes.
The technology has faced intensifying pressure over its potential for misidentification and abuse. San Francisco last week became the first city in the United States to ban facial-recognition use by local police and city agencies. Local lawmakers in California and Massachusetts are considering similar measures.
The hearing came as the Trump administration considers a possible blacklist of Chinese tech companies, such as the video-surveillance giant Hikvision, that have developed facial-recognition software and other technologies used in the monitoring and detention of Muslim Uighurs in China’s Xinjiang region.
Not even Amazon, which has developed a system used by police called Rekognition, has escaped internal questions about the technology. During its annual shareholders’ meeting Wednesday, investors were asked to vote on two proposals that would have demanded further study of the technology’s potential human-rights risks or prevented the company from selling it to government agencies. (Amazon founder and chief executive Jeff Bezos also owns The Washington Post.)
Both shareholder proposals failed. The company said it will reveal exact vote figures later this week.
But Amazon, too, said it supports calls for an “appropriate national legislative framework” restricting the technology’s police and government use. Matt Wood, the general manager of artificial intelligence for Amazon Web Services, said in a statement that the technology can “materially benefit society,” and has been used to identify victims of human trafficking.
“We remain committed to working with Congress to ensure the protection of civil liberties while promoting transparency and accountability in the use of facial recognition technology,” Wood said.
Jake Laperruque, senior counsel for the Constitution Project at the watchdog group Project On Government Oversight, said the hearing “showed a strong bipartisan support for limiting facial recognition surveillance, and doing so promptly. Unrestricted facial recognition is widespread and affects hundreds of millions of Americans, but it is clearly not sustainable.”