The wild, wild west of facial recognition. How can the public gain trust in its use?

… 1 year ago, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for a broad ban on any use of artificial intelligence (AI) for automated recognition of human features in public spaces, in any context[1]. A clear definition of AI in EU law remains to be seen, although the draft AI Act provides some clues by proposing a list of ‘AI techniques and approaches’ for developing software, which so far includes machine learning, logic- and knowledge-based approaches, and statistical approaches[2].

… 6 months ago, in a January article, we looked at some landmark data protection issues raised by facial recognition technology (FRT), giving them the attention they deserve given the overwhelming impact FRT can have on the public. We highlighted some risks and benefits of FRT in the law enforcement context[3].

… Since then, Clearview AI, a New York-based FRT company, has been fined €20 million in Italy and £7.5 million in the UK. It has already been banned from selling its database in Illinois, ordered to delete data in France, and deemed illegal in Canada and Australia. Hikvision, with its thousands of camera networks in London, may be the next target of scrutiny[4].

… A few days ago, the Ada Lovelace Institute published both (i) the Ryder Review[5], an independent legal review led by Matthew Ryder QC into the governance of biometric data in England and Wales, which found that the legal protections in place are defective on many fronts, and (ii) a report with policy recommendations calling for new legislation in this field[6].

Private surveillance companies are regularly pursued, denounced, challenged, and condemned.

And still, there is no regulatory answer. 

The use of FRT in the public sector should call for particular attention, thorough preliminary assessments, and continuous monitoring to verify the accuracy, reliability, and proportionality of such solutions, in line with data protection regulations. But as it stands, the situation looks more like the wild west: there are no rules.

Positive use cases

Of course, the technology itself may be applied in the public interest, from helping law enforcement find criminals to supporting healthcare by identifying symptoms or diagnosing medical and genetic conditions.

We use it to simplify some aspects of our daily lives, such as passing through airports or unlocking our phones. There is also some public support for the use of FRT by the police, where it is seen as a modern advancement that protects citizens from harm[7].

Concerns

However, FRT is legitimately raising more and more concerns, among privacy professionals and the broader public alike.

One of the first areas put under the spotlight is the accuracy of the technology. In the UK, the Met Police’s FRT software was allegedly 89% inaccurate, fed with outdated data, and likely not compliant with human rights law[8]. Some progress has probably been made since; however, more work is needed to ensure that these technologies do not exacerbate over-policing, but instead provide a neutral tool that neither replicates nor amplifies biases. More assurance should also be given about the security measures around the processing, such as details on the storage, retention, and potential transfers of the data captured by FRT.

Then, there is an underlying risk of abuse of power and authority by those using FRT. More effort could be made to evidence the effectiveness of these measures: collecting huge amounts of data is pointless if the means and tools to properly analyse it and draw conclusions from it are not allocated. At the moment, there is a glaring lack of guidelines around where, when, and in what context FRT may be used, which allows distrust in governments, public agencies, and the police to grow.

Additionally, the impacts of FRT on rights and freedoms, especially freedom of speech, freedom of movement, and the right to private life, are still to be assessed. A generalised and purposeless use of FRT leads to bulk surveillance that annihilates anonymity in public spaces, raising concerns about a conformism incompatible with the notions of freedom and democracy.

We can easily doubt that the principles of data minimisation and purpose limitation are observed when looking at current FRT uses. Moreover, there is simply no transparency about why the data collected by all these cameras is being used or where it is going.

Public opinion and trust as criteria for successful implementation 

Privacy concerns are linked to the level of trust and legitimacy that people attribute to FRT uses[9]. The more citizens have confidence in the organisations and authorities using FRT, and feel that such processing is legitimate, the more acceptable the wider public will consider it. This feeling of trust will depend on each use case: why FRT is used, how stakeholders communicate about it, and how they involve the public in the discussion.

In the Ada Lovelace Institute’s 2019 survey, 77% of people said they were uncomfortable with FRT being used in shops to track customers, and 76% were uncomfortable with it being used by HR departments in recruitment[10]. Unsurprisingly, the numbers are higher among Black, Asian, and minority ethnic groups, due to the biases, or the fear of biases, that they face.

Another study, based on YouTube comments, found that 75.4% of the comments expressed negative perceptions of police use of FRT[11]. This study also highlighted that there may be public pressure, and a feeling of social obligation, to approve the use of FRT by the police: the results differed depending on whether respondents answered anonymously, with a lower level of support for FRT when anonymity was enabled, demonstrating how anonymity preserves freedom of expression in that case.

On average across the EU in 2019, only 17% of respondents were willing to share their facial images with public authorities for identification purposes, and only 6% with private companies[12].

Despite all these penalties, concerns, and findings, FRT is being developed and deployed amid almost general indifference from regulators, exposing the sensitive personal data of thousands. Biometric data needs stronger protection, and FRT, like any other data processing, must be comprehensively assessed to demonstrate that it is necessary for the aims pursued.

[1] ‘EDPB & EDPS call for ban on use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination’, European Data Protection Board (europa.eu)

[2] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN

[3] ‘“Little-Known Start-Up” to “World’s Largest Facial Network”: An Update on Clearview AI’, Information Governance Services

[4] ‘Hikvision: The world’s biggest surveillance company you’ve never heard of’, MIT Technology Review

[5] ‘The Ryder Review: Independent legal review of the governance of biometric data in England and Wales’, Ada Lovelace Institute, June 2022 (adalovelaceinstitute.org)

[6] ‘Countermeasures: the need for new legislation to govern biometric technologies in the UK’, Ada Lovelace Institute, June 2022 (adalovelaceinstitute.org)

[7] ‘“Only in our best interest, right?” Public perceptions of police use of facial recognition technology’ (tandfonline.com)

[8] ‘Stop Facial Recognition’, Big Brother Watch; ‘The Met Police’s facial recognition tests are fatally flawed’, WIRED UK

[9] ‘Live Facial Recognition: Trust and Legitimacy as Predictors of Public Support for Police Use of New Technology’, The British Journal of Criminology, Oxford Academic (oup.com)

[10] ‘Countermeasures: the need for new legislation to govern biometric technologies in the UK’, Ada Lovelace Institute

[11] ‘“Only in our best interest, right?” Public perceptions of police use of facial recognition technology’ (tandfonline.com)

[12] ‘Your rights matter: data protection and privacy’ (europa.eu)
