The Black Lives Matter movement notched a win in Silicon Valley this week, turning police use of facial recognition technology into a litmus test for Big Tech’s support of civil rights.

Now everyone protesting for police reform needs to hold the companies to it.

Like dominoes in a row of corporate public relations stunts, first IBM, then Amazon and finally Microsoft (at a Washington Post Live event) said they won’t sell, or will at least pause, police use of their facial recognition technology until there are federal laws on the matter.

Never mind that none of these companies were major players in the police facial recognition market. (Microsoft admitted it has never sold the tech to U.S. police.) But civil rights leaders and privacy advocates I’ve spoken to this week tell me that what they need is for Big Tech to stop arresting their legislative efforts to make the technology off-limits.

So far, Silicon Valley’s record has mostly put it at odds with groups like Color of Change, the NAACP and the American Civil Liberties Union. In fact, a Microsoft employee in Washington state wrote a new facial recognition law that many civil rights groups opposed for not being tough enough.

What happens next will impact the lives of many Americans. Facial recognition technology uses photos to help computers identify people. You might have already encountered it to unlock your phone or board an airplane. It can also be used to identify people who don’t even know they’re being watched, like at a protest.

It’s one of the most powerful surveillance tools ever invented, yet even a federal government study found it less accurate at identifying minorities and women than white men. Ramping up its use could, in theory, help keep criminals from escaping arrest. But it also puts us on a slippery slope toward supercharged policing that’s likely to disproportionately harm people of color, whether through misidentification or simply more surveillance of minority communities.

Amazon also owns the connected doorbell maker Ring, which privacy groups have criticized for partnering with hundreds of police forces, granting them potential access to camera footage of many American streets. Ring doesn’t offer facial recognition, but its video can be shared with police who have it. (Amazon chief executive and founder Jeff Bezos owns The Washington Post.)

There’s no evidence of police using facial recognition technology to make arrests of people protesting the death of George Floyd, though it may take time for those records to emerge. Police in dozens of U.S. cities have access to the tech, and in several cities they have explicitly asked citizens to share images of protesters.

What changed this week is that facial recognition got linked to police racism, the issue that’s gotten Americans angry enough to protest during a pandemic and made the tech politically toxic. Previously, privacy advocates (including me) had linked it to less urgent-sounding concerns like surveillance and squashed speech.

To be clear, this week’s announcements alone won’t likely do much to stop the use of this technology by law enforcement. The most important players in the murky market, such as NEC Corp., Idemia and Clearview AI, are lesser-known companies that have not joined the voluntary moratoriums.

“Clearview AI is also committed to the responsible use of its powerful technology and is used only for after-the-crime investigations to help identify criminal suspects,” the company said in a statement.

NEC said its technology could combat racism, by helping to “correct inherent biases, protect privacy and civil liberties, and fairly and effectively conduct investigations for social justice.” Idemia didn’t immediately reply to requests for comment.

The only thing that’s really going to stop police from using the tech is new laws.

That’s why the announcements by IBM, Amazon and Microsoft were a success for activists: a rare retreat by some of Silicon Valley’s biggest names on a key new technology. The retreat followed years of work by researchers including Joy Buolamwini to make the case that facial recognition software is biased. A test commissioned by the ACLU of Northern California found that Amazon’s Rekognition software misidentified 28 lawmakers as people who had been arrested for a crime. That happens in part because the systems are trained on data sets that are themselves skewed.

Yet opponents of facial recognition technology say the problems go far beyond bad software. “Yes, accuracy disparities mean already marginalized communities face the brunt of misidentifications,” said Buolamwini, who founded an organization called the Algorithmic Justice League. “But the point isn’t just that facial recognition systems can misidentify faces, it’s that the technology itself can be used in biased ways by governments or corporations.”

For example, more cameras could be pointed at minority neighborhoods, or the technology could be used to target immigrants or even people who join protests against police brutality.

Buolamwini has asked tech companies to sign a pledge that would prohibit the use of their technology in contexts in which lethal force may be used, including by police or the military. (So far, none of the big ones have.)

“There are too many ways in which it can be recklessly applied, and too few examples of the ways in which it serves a fundamental public good,” said Brandi Collins-Dexter, senior campaign director at Color of Change.

That’s why she and others are calling for not just better facial recognition tech but a stop to its use by governments.

A half-dozen cities, such as San Francisco, already have those sorts of laws. On Tuesday, Boston held a hearing about adopting a ban, during which Police Commissioner William Gross said he didn’t want to use the tech because it was faulty.

The challenge, say opponents of facial recognition technology, is that tech companies want to say they support civil rights without actually putting significant limits on potential business upside. There are potential military, international and corporate contracts at stake, largely missing from this week’s promises. And weak laws could end up legitimizing police use of the tech.

Microsoft in particular has been trying to have it both ways. Last week, chief executive Satya Nadella told employees in a blog post that the company would support racial justice in the wake of Floyd’s death. Just a day earlier, the company had been fighting some 65 civil rights organizations in California as it pushed a bill that would authorize police and companies to use facial recognition tech, with restrictions that fall far short of a moratorium.

Microsoft didn’t get its way in California: AB 2261 failed in the state legislature last week.

Microsoft was the first big tech company to call for laws on facial recognition technology back in 2018 and has been the most visible in statehouse and city hall hearings. It says it opposes use of facial recognition software for mass surveillance, racial profiling or other violations of basic human rights and freedoms.

“We need to use this moment to pursue a strong national law to govern facial recognition that is grounded in the protection of human rights,” Microsoft President Brad Smith said at The Post event Thursday. “If all the responsible companies in the country cede this market to those that are not prepared to take a stand, we won’t necessarily serve the national interest or the lives of the black and African American people.”

But the company’s legislative stance so far has boiled down to: put some rules in place, sure, but no moratorium.

The devil is in the details. In Washington state, Microsoft supported legislation, sponsored by state Sen. Joe Nguyen, who is also a Microsoft employee, that outlines how government agencies can use the tech and requires them to produce accountability reports. It also addresses accuracy concerns by saying agencies can use the tech only if it comes from a developer that makes its software available for testing.

But opponents said the Washington law comes with too few limits and enforcement measures. “Agencies may use face surveillance without any restrictions to surveil entire crowds at football stadiums, places of worship, or on public street corners, chilling people’s constitutionally protected rights and civil liberties,” the ACLU of Washington said.

Microsoft’s announcement this week that it wouldn’t sell to police until there is a federal law “should feel like winning, but it feels more like a thinly veiled threat,” said Liz O’Sullivan, technology director of the Surveillance Technology Oversight Project. The Washington law, she said, would be a bad model for Congress.

“They’re seeding the conversations around facial recognition regulation in a number of states by lobbying for bills that might look to a lot of people like they’ve got really strict protections. But then, if you actually look at them, they don’t really actually regulate the technology much as it’s used,” said Jameson Spivack, a policy associate at Georgetown Law’s Center on Privacy and Technology. “It’s their way of getting ahead of the opposition and co-opting the movement.”

Amazon, which didn’t reply to my requests for comment, has said less in public about its legislative goals, aside from calling for federal legislation on privacy and on facial recognition tech. One pressing issue for any national legislation is whether it would overrule state and local laws that might be stricter.

“Legislatures and activists and civil rights groups are already leading on this issue,” said Matt Cagle, technology and civil liberties attorney for the ACLU of Northern California. “We just hope that companies like Microsoft see that and stand with us rather than against us.”