Facial recognition isn't science fiction -- it's used by Facebook, in airports and even in churches. But no federal laws regulate how businesses can use the technology. In fact, a new report from the Government Accountability Office says the government doesn't even know where or how facial recognition tech -- which typically uses software to verify or identify a person based on images of their face -- is being used by companies in the United States.
"Facial recognition technology is currently being used in a number of commercial applications in the United States, but the full extent of its present use is not known," the report says. "The International Biometrics & Identification Association, other industry trade organizations, and [Federal Trade Commission] staff told us they knew of no comprehensive reliable information on the extent to which U.S. businesses use facial recognition technology."
If the government doesn't know how it's being used, consumers are even less likely to understand the technology's privacy implications, said Alvaro Bedoya, the executive director of the Center on Privacy & Technology at Georgetown Law. "It's remarkable that the government's lead investigative organization really has no idea how broadly this technology is being deployed — it speaks to how far we are from adequate privacy protections," he said.
No federal regulations explicitly deal with facial recognition tech. But some laws governing how companies use personal information in the medical and financial industries may apply to certain uses of the technology, the GAO report noted. The Federal Trade Commission could also go after companies whose use of facial recognition doesn't line up with their privacy policies. And Texas and Illinois have passed biometric privacy laws that privacy advocates believe require companies to get people's consent before using the tech on them.
The Department of Commerce's National Telecommunications and Information Administration is currently running a series of multi-stakeholder meetings aimed at coming up with a set of voluntary rules for commercial uses of facial recognition technology. But privacy groups dropped out of the meetings earlier in the summer, saying that they didn't believe the process would result in any meaningful protections for consumers.
Industry advocates, such as Carl Szabo, policy counsel at online advertising trade group NetChoice, say that more transparency should be enough to alleviate many of the privacy concerns raised by consumer groups. "Once people know facial recognition technology is being used, they can react," he said. Transparency requirements would also give groups like the GAO more insight into how exactly the tech is being used, Szabo said.
Bedoya and other consumer advocates say that isn't enough, arguing that consumers also need meaningful ways to consent to being caught up in facial recognition systems. The issue was central to privacy groups' decision to bow out of the NTIA process, he said.
But Szabo argued that express consent doesn't always make sense for the ways facial recognition tech might be used. If a company uses a camera-based facial recognition system to verify the identities of employees entering a secure workplace, he argued, the system would have consent from the employees but not necessarily from anyone who walked by the camera and was automatically compared against a database of people approved for access.
"The idea of consent as a general concept is great, but once you try to apply it to how it actually works it's not so simple," Szabo said.