Stephanie Hare is an independent researcher and author of the forthcoming book “Technology Ethics.”

In September, London’s most senior police officer, Cressida Dick, warned that Britain could be sleep-walking into “some kind of ghastly, Orwellian, omniscient police state” if it didn’t address the ethical dilemmas posed by facial recognition and artificial intelligence. Now, her prophecy is coming to pass.

Last week, the London Metropolitan Police announced that it will start using live facial recognition technology to identify criminal suspects in real time, in one of the largest experiments of its type outside of China. The entire world should be paying attention.

Since 2010, British police have been operating under severe strain as successive governments cut more than 20,000 police jobs. London’s police force, in particular, confronts a constant terror threat and a surging knife-crime epidemic.

Viewed through this lens, biometrics technologies seem like a solution. They aren’t.

There are many concerns with facial recognition technology. For a start, it has been found to disproportionately misidentify people of color, women, children and the elderly, according to a landmark U.S. federal study last month. This echoes earlier research in the United States, as well as the independent review commissioned by the Metropolitan Police, which found that its trials of live facial recognition in London were only 19 percent accurate, did not have “equal performance” across age, gender and ethnicity, and were unlikely to survive a legal challenge.

That’s not good enough for the people who live in and visit London, and it’s not good enough for the police, either.

Live facial-recognition technology will introduce two new risks in London: false negatives and false positives. False negatives occur when the technology fails to match a face to one on the watch list, allowing suspects to go undetected. False positives, in which the technology misidentifies innocent people, are likely to worsen community relations with the police, especially among those who are already stopped and searched more than others.

The technology could also increase the chances of fatal mistakes. In 2005, London police officers stormed a train and shot dead Jean Charles de Menezes, a 27-year-old Brazilian electrician whom they had, through human error, incorrectly identified as a suspected terrorist. The Metropolitan Police spent years fighting legal action and paid hundreds of thousands of pounds in taxpayer money in compensation and fines. Given the low accuracy rate of facial recognition technology, what steps are in place to prevent such tragedies from happening today?

To make matters worse, the Home Office, which is in charge of all British law enforcement, has failed to comply with a High Court ruling in 2012 that makes it unlawful to keep the facial data of individuals taken into custody and released without charge or acquitted. It has claimed that it would be too expensive to delete all these images. That’s why Biometrics Commissioner Paul Wiles estimates that Britain’s national police database contains images of hundreds of thousands of “individuals who have never been charged with, let alone convicted of, an offense.” This means that these people are at risk of having their facial image matched to a facial image that should never have been on the police watch list in the first place.

This is part of a larger issue: there is no proper legal framework for live facial-recognition technology. The biometrics commissioner, the surveillance camera commissioner and the information commissioner have all called repeatedly for a proper legislative framework for facial recognition and other biometric and surveillance technologies. Several technology companies have, too, including Google, whose chief executive, Sundar Pichai, called for a moratorium on facial recognition technology earlier this month.

The Science and Technology Committee in Britain’s House of Commons has also called for a moratorium, and Lord Clement-Jones has introduced a bill for a moratorium and review of live facial recognition technology in the House of Lords. At the very least, London’s police should explain why it is ignoring all of these warnings.

British law enforcement requires warrants to search homes, businesses and phones. Why do these areas enjoy greater legal protections than our faces, which link our physical and digital selves? And how is it reasonable and proportionate to subject us to a real-time, 24/7 digital dragnet that jeopardizes our privacy, anonymity and many of our civil liberties, when we could instead create a legal requirement for targeted, geographically and temporally limited face searches backed by warrants? These are questions the Metropolitan Police must answer.

This is not just about the rights of Londoners. Britain is the first liberal democracy to implement on such a scale the kind of authoritarian technology we’re more accustomed to seeing in places such as China. That it is doing so without public consultation, a legislative framework or judicial oversight imperils our civil liberties and the rule of law. What a disturbing example to set for the world.

Can Britain prevent police use of a mass surveillance tool from sliding into the authoritarian abuses already underway elsewhere in the world? Or will London become the first liberal democratic Orwellian surveillance city and kick-start an alarming global trend?

We’re about to find out. And we’re all watching — even as we’re being watched.
