28th July 2021

Advancements in biometric technology will prove challenging for EU lawmakers


In April, the European Commission proposed a regulation that would place strict safeguards on the use of artificial intelligence, with likely implications for global AI rules. The expansive rule book covers a lot of ground, but facial recognition, a touchy subject in much of privacy-conscious Europe, is a key focus.

The plan includes prohibitions on a number of use-cases that are considered too dangerous to people’s safety or EU citizens’ fundamental rights, such as a China-style social credit scoring system or AI-enabled behaviour manipulation techniques that can cause physical or psychological harm.

There are also transparency requirements for certain use-cases of AI, such as chatbots and deepfakes, where EU lawmakers believe that potential risk can be mitigated by informing users that they are interacting with a program.

The planned law is intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals—so, as with the EU’s data protection regime, it will be extraterritorial in scope.

Are the regulations too weak?

The biometrics business is on a collision course with Europe’s data protection experts. Both the European Data Protection Supervisor, which acts as the EU’s independent data protection authority, and the European Data Protection Board, which helps member states implement GDPR consistently, have called for a total ban on using AI to automatically recognise people.

Both organisations have said that AI should not be used in public spaces for the automated recognition of faces, gait, fingerprints, DNA, voice, keystrokes, or other types of biometric data. They also want a ban on using AI to try to infer people’s ethnicity, gender, or political or sexual orientation.

All of this flies in the face of the EU’s proposed regulations. Plenty of criticism has been levelled at the proposal’s overly broad exemptions for law enforcement’s use of remote biometric surveillance (such as facial recognition), and critics argue that the measures in the regulation to address the risk of AI systems discriminating do not go nearly far enough.

Notably, there is a significant gap in the proposal around ‘discriminatory and surveillance technologies’. The regulation may well allow too wide a scope for self-regulation by companies profiting from AI.

And, as with any legislative proposal, whether domestic or at EU level, lobbyists and analysts have been quick to label the proposal as unwarranted red tape.

The Center for Data Innovation, a European and American technology, data, and policy think tank, claims the regulation will ‘kneecap the EU’s nascent AI industry before it can learn to walk’.

A student in China has her face and fingerprint scanned. (Photo credit: AFP)

Analysing biometric technologies before they analyse you

By the mid-2020s, it’s estimated that the global biometrics industry will be worth between $68.6 billion and $82.8 billion.

Across the EU’s 27 member states, a number of companies have been developing and deploying biometric technologies that, in some cases, aim to predict people’s gender and ethnicity and to recognise their emotions. In many cases the technology is already being used in the real world. However, using AI to make these classifications can be scientifically and ethically dubious.

On the one hand, using this technology can help make our lives more convenient and potentially reduce fraud. On the other, it can be invasive and discriminatory. Bank cards are getting fingerprint scanners, airports are using facial recognition to identify people, police in Greece are deploying live facial recognition, and in the UK police are reportedly experimenting with AI that can detect if people are distressed or angry.

While regulators debate these laws—including whether to ban biometrics entirely—the technology is creeping further into our day-to-day lives. By the time legislation is in place, the technology may already be commonplace.

Biometric scanners are now increasingly common in European airports. (Photo credit: Stuart Bailey)

Where to next?

In the digital age our bodies are data goldmines. From the way we look to how we think and feel, biometric surveillance allows for alarming new ways to track everything we do—and in most cases, we may not even know we are being tracked. Allowing such an intrusive technology to develop unchecked and unrestricted is simply not an option.

The EU’s AI regulatory proposal marks the start of plenty of debate under the EU’s co-legislative process. The European Parliament and member states, via the EU Council, still need to have their say on the draft. That suggests a lot could change before EU institutions reach agreement on the final shape of a pan-EU AI regulation.

Commissioners declined to give a timeframe for when legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could be concluded as soon as possible. It could, nonetheless, be several years before the regulation is ratified and comes into force.

About the author

Karl specialises in public and corporate affairs for clients operating in highly regulated environments. Prior to joining 360, Karl worked in public affairs and public relations with other communication consultancies. He has also worked for high-profile politicians in both the European Parliament and the Houses of the Oireachtas.
