“These technologies have the potential to enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale,” the attorneys wrote. “Such surveillance would permit the government to pervasively track people’s movements and associations in ways that threaten core constitutional values.”
The DEA and FBI said they don’t comment on pending litigation. The DOJ declined to comment.
The lawsuit marks a new chapter in growing resistance to the technology, which has quickly become a far-reaching presence in people’s lives with little to no legislative approval or public debate.
Federal investigators and local police agencies nationwide now routinely use facial-recognition software to look for potential suspects and witnesses to crime, scanning hundreds of millions of Americans’ photos, including from state driver’s license databases. Facial recognition is also used to unlock cellphones, monitor public venues, and guard the entryways of schools, workplaces and housing complexes, scanning visitors’ faces to grant them access or alert security officials.
Government and law enforcement officials have argued that the software offers a powerful investigative tool that can more quickly pinpoint dangerous suspects. But some lawmakers and privacy advocates argue that the systems erode American protections against government surveillance and unlawful searches by scanning people without their knowledge or consent, and that inaccuracies in the systems could undermine criminal prosecutions, unfairly target people of color and lead to false arrests.
ACLU attorneys wrote that the agencies have not responded to requests to provide legal, policy and training records first sought in January under the Freedom of Information Act. Those documents could help outline how many times facial-recognition software has been used in arrests, how the searches are used by local and state law-enforcement agencies, and how accurate the systems are required to be for real-world use.
The attorneys also requested records related to government use of voice- and gait-recognition software, which could help identify people based on how they talk and walk. In a blog post announcing the lawsuit, Kade Crockford, a director at the ACLU’s Massachusetts office, wrote, “The dystopian surveillance technology threatens to fundamentally alter our free society.”
The FBI allows federal and local investigators to submit a “probe” photo of someone’s face and search against a database of more than 30 million criminal mug shots using its Next Generation Identification system, which the bureau calls “the world’s largest and most efficient electronic repository of biometric and criminal history information.”
More than 640 million facial photos, including from state driver’s license databases, are also available for search by an internal FBI unit known as Facial Analysis, Comparison and Evaluation, or FACE, the Government Accountability Office reported in June. That team has logged more than 390,000 facial-recognition searches from local, state and federal investigators since 2011.
Facial-recognition technology is designed by companies such as Amazon, Idemia and NEC and offered on a contract basis for government use. (Amazon founder Jeff Bezos owns The Washington Post.)
The technology’s use has spread rapidly across the government, including Customs and Border Protection, whose officials have said they want to run facial-recognition scans on 97 percent of all air travelers flying out of the country within the next four years. An FBI counterterrorism official said last year that the bureau was testing Amazon’s facial-recognition software Rekognition, arguing that it would have significantly reduced the time taken to identify the suspected gunman in the 2017 mass shooting in Las Vegas.
The largely unregulated technology has faced bipartisan anger in Congress, where lawmakers in multiple hearings have pressed federal officials for answers about what legal basis the agencies have to scan the faces of Americans not suspected of a crime.
In a letter last month to FBI Director Christopher A. Wray and DHS acting secretary Kevin McAleenan, a group of eight congressional Democrats and Republicans, including the leaders of the homeland security committees in the House and the Senate, voiced concerns about how the agencies used the technology and requested answers about standards and safeguards. They have not received a response.
Sen. Christopher A. Coons (D-Del.) said in a statement that he was concerned about potentially “invasive” facial-recognition systems, adding, “Congress needs to know more about how this technology is being deployed.” A representative for Sen. Rand Paul (R-Ky.) said the senator “continues to believe that unelected bureaucrats shouldn’t be enabled to run a surveillance state, especially without any oversight from Congress.”
New congressional measures could limit the technology’s use. Reps. Ayanna Pressley (D-Mass.), Yvette D. Clarke (D-N.Y.) and Rashida Tlaib (D-Mich.) introduced a bill this summer that would ban the technology’s use in public and assisted housing, citing concerns about “over-surveillance.”
The technology has also faced a growing wave of local resistance. Since San Francisco banned local officials from using the technology in May, five other municipalities in California and Massachusetts have followed suit.
California Gov. Gavin Newsom (D) this month also approved a law that temporarily bans police agencies in the state from using facial-recognition software in body cameras. The measure faced blowback from police unions including the Riverside Sheriff’s Association, which said it would block law-enforcement officers from having “the necessary tools they need to properly protect the public.”
A survey by the Pew Research Center released last month found that 56 percent of U.S. adults said they trusted police to use facial-recognition software responsibly. Young respondents and black respondents, however, were the least likely to trust law-enforcement use.
The ACLU has become a main opponent of the technology’s expansion, publishing tests it says cast doubt on the technology’s capability and accuracy. In one test last year, the photos of members of Congress were run through a database of 25,000 police mug shots using Rekognition software, and 28 lawmakers, including a disproportionate number of people of color, were incorrectly matched to people charged with a crime. The ACLU earlier this month published a similar test that showed 27 mismatches of New England professional athletes.
Amazon has disputed the findings, saying real-world users are instructed to use a higher “confidence” threshold that could lead to fewer mismatches. But local agencies are free to disregard those thresholds in their searches: A sheriff’s office in Oregon that used Rekognition told The Post earlier this year that each facial-recognition search returned five possible results, whether the system was highly confident in the match or not.
The systems’ accuracy is heavily dependent on factors such as database size and image quality, leading privacy advocates to worry about the potential ramifications of a false match. Police agencies have also used altered photos, artist sketches and celebrity look-alike images in facial-recognition searches, potentially skewing the results, according to public records revealed this spring by Georgetown Law’s Center on Privacy and Technology.
A study last year by researchers at Microsoft Research and the Massachusetts Institute of Technology’s Media Lab found that facial-analysis systems were more accurate with people with lighter skin. Tests released this summer from the National Institute of Standards and Technology, the federal agency that analyzes facial-recognition algorithms, showed that widely used algorithms were becoming more accurate. But critics argue that even a low error rate, when applied to hundreds of thousands of searches, could lead to scores of false arrests.