with Aaron Schaffer
(Amazon founder and chief executive Jeff Bezos owns The Washington Post.)
Amazon's announcement prompted privacy and civil rights advocates to turn up the pressure on lawmakers to put greater limits on facial recognition.
Researchers have found that such software, including Amazon’s Rekognition product, is more likely to falsely identify minorities. (Amazon has disputed those findings.)
In light of these studies, lawmakers from both parties have put forward proposals to curtail facial recognition software in recent months, but the proposals haven’t yet made it to President Biden’s desk.
Privacy advocates welcomed Amazon’s announcement, but they say it underscores the need for broader action from Washington. Even though the technology is controversial, it continues to be used in a wide range of law enforcement cases, notably the investigation of the Jan. 6 Capitol attack.
“The threats posed last year by police use of face recognition technology are identical today,” Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said in a statement. “Now, the Biden administration and legislatures across the country must further protect communities from the dangers of this technology by ending its use by law enforcement entirely, regardless of which company is selling it.”
Microsoft made a similar promise to stop selling the product to police, and IBM last year said it abandoned its general purpose facial recognition software amid concerns the software could be abused to violate human rights. But there are also hundreds of other similar algorithms available, and in some instances they’re being marketed to police. Meanwhile, there have been at least three lawsuits in the United States – all brought by Black men – that raise questions about the accuracy of the technology.
Major tech companies have been heavily lobbying on the issue as lawmakers weigh proposals.
In a 2019 blog post, Amazon suggested new limits on the technology, but also said “new technology should not be banned or condemned because of its potential misuse.”
Yet advocates say that Congress can't leave critical decisions about the appropriate use of facial recognition up to industry, and they're calling for a full-out federal ban.
“Basically, at any time Amazon could flip the switch and resume selling facial recognition technology to the police,” said Evan Greer, deputy director of the digital rights advocacy group Fight for the Future. “Facial recognition technology is too dangerous for it to be implemented at the whims of corporations like Amazon. We need Congress to take action and pass a federal ban on facial recognition now.”
Lawmakers could pass legislation targeting the software, as pressure mounts for them to act on both police reform and data privacy.
Lawmakers from both parties have expressed concerns about law enforcement's growing use of facial recognition software.
The House earlier this year passed the George Floyd Justice in Policing Act, which would specifically ban police from using facial recognition in body cameras. It would also direct the Government Accountability Office to study law enforcement use of facial recognition. The bill faces significant obstacles in the 50-50 Senate, though Sen. Tim Scott (R-S.C.) has said he hopes lawmakers can reach a bipartisan deal to address police reform.
Additionally, Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) earlier this year announced the Fourth Amendment Is Not For Sale Act, which takes aim at the popular Clearview AI facial recognition program that’s used by hundreds of police departments across the country.
Unlike traditional facial recognition systems that use photos from drivers’ licenses or jail mug shots, Clearview pulls photos from social networks and other websites. Facebook, Google and Twitter have accused the company of breaking their rules. Wyden and Paul’s bill would block law enforcement agencies from buying data that was “illegitimately obtained” via deception or breach of contract.
A patchwork of state and city facial recognition laws has emerged in the absence of congressional action.
Cities and states have been taking matters into their own hands. Cities including San Francisco have banned police and other city agencies from using facial recognition technology. California has a moratorium that prevents law enforcement from using facial recognition in body cameras, and Illinois and other states have laws governing use of biometric data.
Virginia, where Amazon plans to open a second headquarters, recently passed one of the strictest measures in the country, a law that takes effect July 1. No local law enforcement agency will be allowed to purchase or deploy the software without prior legislative approval.
Rant and rave
Former VentureBeat executive editor Emil Protalinski noted that Amazon still has other business with law enforcement: many police departments still have partnerships that let them request footage from Ring cameras.
I assume this doesn't conflict with Amazon giving police departments access to the Ring surveillance network? https://t.co/39ifLau2zT https://t.co/UHICadEdpy— Emil Protalinski (@EPro) May 18, 2021
Motherboard’s Edward Ongweso Jr.:
when are they gonna END it tho? https://t.co/PjahKK9OeK— Edward Ongweso Jr (@bigblackjacobin) May 18, 2021
Our top tabs
Pinterest set hiring targets for female leaders and people of color after facing allegations of gender and racial bias.
CEO Ben Silbermann's announcement came six months after the company reached a $22.5 million settlement with a former executive who alleged gender discrimination and retaliation, the Wall Street Journal’s Sarah E. Needleman reports.
“What can come out of that is learning what we need to do better and making changes,” Silbermann said in his first interview since the settlement. “I’m trying to personally set that better tone.”
Pinterest, which has about 2,700 employees, has been at the center of Silicon Valley’s conversation about race and gender. Last year, two Black female ex-Pinterest employees said they had been subjected to racist comments from a manager and retaliation. They also said they were underpaid. The company now says it has achieved “pay equity" across its U.S. workforce by race and gender.
TikTok is recommending homophobic videos, a media watchdog group reports.
Media Matters for America, a group founded by liberal activist David Brock, found that users who “liked” a homophobic video on the platform would get recommendations for similar videos. The recommendations came despite TikTok policies forbidding “hateful behavior” based on categories including sexual orientation and gender identity.
The recommended videos became more exclusively homophobic as more of the videos were “liked,” the group said. It pointed to videos on the platform that included slurs, users celebrating being homophobic, and calls to destroy rainbow flags.
“TikTok is committed to supporting and uplifting LGBTQ+ voices, and we work to create a welcoming community environment by removing anti-LGBTQ+ videos and accounts that attempt to spread hateful ideas on our platform,” TikTok spokeswoman Jamie Favazza said in an emailed statement.
Lawmakers criticized Facebook’s plans to build a version of Instagram for children at a hearing on online safety.
Sen. Edward J. Markey (D-Mass.) used the hearing as an opportunity to push his proposal for online children’s privacy legislation, while three experts who testified before a Senate Commerce Committee panel expressed their opposition to the Instagram spinoff, the Hill’s Rebecca Klar reports.
“Facebook has not earned our trust to start doing children’s services in this way,” said Baroness Beeban Kidron, the founder of 5Rights Foundation.
A Facebook spokesperson defended the company's plans to push forward with a children's version of the photo-sharing service and promised to work with regulators. “As every parent knows, kids are already online,” the person said in a statement. “We want to improve this situation by delivering experiences that give parents visibility and control over what their kids are doing.”
Inside the industry
- NetApp and NI have joined TechNet as members.
- The IBM Policy Lab hosts a discussion of the European Union’s proposal to regulate artificial intelligence on Thursday at 8:30 a.m.
- The Senate Commerce Committee meets to consider Eric Lander, President Biden’s nominee to lead the White House’s Office of Science and Technology Policy, on Thursday at 10 a.m.
- Political theorist Langdon Winner discusses technology and democracy at an event hosted by the University of Washington’s Tech Policy Lab on Thursday at 8:30 p.m.
Before you log off
Today’s second @washingtonpost quarantine TikTok features the current state of the crypto market https://t.co/5O4h9FF8V0 pic.twitter.com/fIIY6bfv7j— Washington Post TikTok Guy 🕺🏼 (@davejorgenson) May 18, 2021