The use of facial recognition technology is widespread throughout the federal government, and many agencies do not even know which systems they are using. That needs to change, the federal government’s main watchdog said in a new report.
“Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees,” the report said. “These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy.”
The report recommends agencies enact controls to make sure they know what systems their employees are using.
Algorithms that can search through massive databases of photos and pick out matches have been used by law enforcement agencies for years. The technology remains controversial, with studies showing that the systems are worse at identifying Black and Brown faces than White ones.
In April, a Black man from Michigan sued Detroit police after their facial recognition system wrongly identified him as a shoplifter caught on video and arrested him. Several counties and cities, including San Francisco and Portland, Ore., have barred local police from using facial recognition, and Amazon has extended a moratorium on selling Rekognition to law enforcement agencies.
The GAO report sheds light on how deeply the technology has become integrated into how federal agencies work.
“Why on earth does the Fish and Wildlife Service need a facial recognition database? Why does the IRS? A lot of these agencies really have no need for this technology,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a group that advocates against the discriminatory use of surveillance tools. “These agencies are effectively normalizing this technology as a permanent part of our infrastructure. And we have no real guidance or protections.”
Six agencies, including the U.S. Park Police and the FBI, said they had used facial recognition on people who participated in protests after the killing of George Floyd by a Minneapolis police officer in May 2020. The agencies said they used it only on people they suspected of breaking the law, according to the report. The U.S. Capitol Police used Clearview AI to conduct its investigation of the Jan. 6 attack on the Capitol. Customs and Border Protection and the State Department said they ran searches for Capitol rioters on their own databases at the request of other federal agencies.
Facial recognition has been used heavily for non-law-enforcement reasons, too. The Secret Service tested using the tech for checking who was entering and leaving the White House but told the GAO it ended up abandoning the idea. The Transportation Security Administration is testing facial recognition to verify the identity of travelers at airports. And federal court officers used it to verify the identity of people under court-ordered supervision, to avoid in-person contact during the coronavirus pandemic.
“One of our, you know, really strong concerns throughout has been that we would see the proliferation of invasive and biased technology to respond to a public health threat,” Cahn said.
The government should go well beyond the GAO’s recommendation to enact more controls, Cahn said, and institute a moratorium on using facial recognition.
The GAO report also shows the breadth of the facial recognition databases that government agencies and private companies maintain and make available.
Government databases vary, from the Federal Bureau of Prisons’ collection of 8,000 employee and contractor photos to the Office of Biometric and Identity Management’s massive trove of 836 million passport, mug shot and visa application pictures. Private companies’ databases are even bigger. Clearview AI claims its system has 3 billion images that it scraped from the Internet, including from Facebook and other social media profiles.
It is likely that every American is represented multiple times over in several collections, owned by both private and government entities.