The federal government plans to expand its use of facial recognition to pursue criminals and scan for threats, an internal survey has found, even as concerns grow about the technology’s potential for contributing to improper surveillance and false arrests.
Most of the agencies use face-scanning technology so employees can unlock their phones and laptops or access buildings, though a growing number said they are using the software to track people and investigate crime. The Department of Agriculture, for instance, said it wants to use it to monitor live surveillance feeds at its facilities and send an alert if it spots any faces also found on a watch list.
The government’s expansion comes as major tech companies have pushed to stall law enforcement’s adoption of the software. Amazon said in May that it had stopped selling its facial recognition software Rekognition to U.S. police, citing the lack of federal laws governing how the software should be used. (Amazon founder Jeff Bezos owns The Washington Post.)
Three states — Virginia, Massachusetts and Maine — and more than a dozen cities, including Boston, Portland and San Francisco, have banned or restricted the technology’s use by public officials or police.
Representatives from both parties voiced concerns about the technology during a House Judiciary Committee hearing last month. And Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) in April introduced a bill that would ban the government from using facial recognition systems that relied on data that had been “illegitimately obtained.”
Proponents say the software’s accuracy is improving and that it has played a critical role in helping track and identify major criminals. But the technology’s accuracy has been shown in research to vary wildly depending on the skin color of the person being surveilled. Facial recognition searches have been cited in at least three wrongful arrests, all of which were of Black men, and in the identification of protesters accused of violence during demonstrations over the murder of George Floyd.
“Even with all the privacy issues and accuracy problems, the government is pretty much saying, ‘Damn the torpedoes, full speed ahead,’” said Jake Laperruque, a senior counsel at the Project on Government Oversight, an independent watchdog group in Washington.
Many of the technology’s uses are commonplace or uncontroversial, including for secure-doorway access or to match a person’s face to their passport, Laperruque said. But some of its more pervasive law enforcement uses, he said, “present a really big surveillance threat that only Congress can solve.”
The Government Accountability Office (GAO) said in June that 20 federal agencies have used either internally developed or privately run facial recognition software, even though 13 of those agencies said they did not “have awareness” of which private systems they used and had therefore “not fully assessed the potential risks … to privacy and accuracy.”
In the current report, the GAO said several agencies, including the Justice Department, the Air Force and Immigration and Customs Enforcement, reported that they had used facial recognition software from Clearview AI, a firm that has faced lawsuits from privacy groups and legal demands from Google and Facebook after it copied billions of facial images from social media without their approval.
Some, including the U.S. Secret Service, said they had tested the service’s free trial. The U.S. Park Police said it stopped using the service last June, the same month that the Fish and Wildlife Service reported it had purchased an annual subscription, the GAO reported.
Many federal agencies said they used the software by requesting that officials in state and local governments run searches on their own software and report the results. Many searches were routed through a nationwide network of “fusion centers,” which local police and federal investigators use to share information on potential threats or terrorist attacks. U.S. Customs and Border Protection, for instance, told the GAO it used Clearview’s software free by requesting help from an agent stationed at a fusion center in New York.
Immigration and Customs Enforcement, which The Washington Post has reported uses databases of driver’s licenses, license plates and private utility records to pursue immigration violations or other crimes, said it had awarded a contract to enhance its facial recognition system with a database of “transnational gang members.”
Ten agencies also said they were researching how to upgrade or build new facial recognition features, such as improving the systems’ ability to recognize people wearing masks. The State Department said it had researched how the process of aging could affect the systems’ accuracy for assessing children’s passport photos, the GAO report said.
The Department of Transportation said it had researched monitoring systems to analyze a commercial truck driver’s eyes for signs of distraction, drowsiness or fatigue, similar to the systems used today by Amazon to monitor its delivery drivers. The agency said it also had studied eye-tracking systems to measure the alertness of train drivers and air traffic controllers.
The GAO report did not address questions of how effective the systems are. But privacy advocates have argued that the trade-offs and risks surrounding the technology’s expansion are often not worth the result.
U.S. Customs and Border Protection officials, who have called the technology “the way of the future,” said earlier this month that they had run facial recognition scans on more than 88 million travelers at airports, cruise ports and border crossings. The systems, the officials said, have detected 850 impostors since 2018 — or about 1 in every 103,000 faces scanned.