Now, it’s France’s turn in the spotlight. President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online, according to Bloomberg News’s Helene Fouquet. The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation – by forcing everyone signing up to the service to use facial recognition, whether they like it or not. The only way for someone to enroll is to let the app take a selfie video and compare the various facial expressions to the person’s passport photo.
Neither this objection nor a lawsuit filed by privacy advocacy group La Quadrature du Net seems to have deterred the state, though.
How worried should Europeans be? We’re clearly a long way from the dystopian fiction of “1984” – France says it’s not going to be using this biometric data to keep tabs on people, and plans to delete it straight after the enrolment process. The fact that the GDPR exists and is being tested against state actors is a good sign, and leagues away from the state surveillance seen in places like China. Lawsuits can be a prelude to a better system.
More broadly, the scramble that’s taking place among public and private actors to create online digital identities does have some potential upsides. For example, France is also planning to give tax authorities the power to harvest data from Facebook, Instagram and other social media to help detect fraud. Whether it’s via state IDs or by putting the private-sector Panopticons of Silicon Valley on a leash, less anonymity doesn’t have to be all bad – provided there are also clear paths to prevent abuse of power at the same time.
But there’s also plenty of evidence that the rules of engagement need to be tougher. The GDPR is good at setting the bar for consent, but it also offers loopholes when national security or the public interest is seen to be at risk. Live trials of real-time facial recognition by the police have taken place in the U.K. and France, along with the message that this is needed to keep people safe. This is not a risk-free technology: One review of a trial by the London Metropolitan Police found almost two-thirds of computer-generated matches judged initially credible turned out to be wrong. Protecting citizens from indiscriminate use of this technology would mean toughening up the haphazardly applied GDPR.
This debate has historical precedent. The introduction of new passport standards after the First World War generated huge resistance, particularly around the passport photo, which for some was seen as a sign that they “can’t be trusted anymore.” Maybe in 100 years’ time, the thought of resisting iris scans or face-tracking will seem archaic. But all the more reason now to make sure the checks and balances are put in place to protect people from the obvious downsides – errors, function creep, bureaucratic abuse – that, if left unchecked, would lead to a much gloomier future.
To contact the author of this story: Lionel Laurent at firstname.lastname@example.org
To contact the editor responsible for this story: Melissa Pozsgay at email@example.com
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Lionel Laurent is a Bloomberg Opinion columnist covering Brussels. He previously worked at Reuters and Forbes.