… 1 year ago, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for a broad ban on any use of artificial intelligence (AI) for automated recognition of human features in public spaces, in any context. A clear definition of AI in EU law remains to be seen, although the draft AI Act provides some clues by proposing a list of ‘AI techniques and approaches’ for developing software, which so far includes machine learning, logic- and knowledge-based approaches, and statistical approaches.
… 6 months ago, in a January article, we looked at some landmark data protection issues raised by facial recognition technology (FRT), giving them the attention they deserve given the overwhelming impact FRT can have on the public. We highlighted some risks and benefits of FRT in the law enforcement context.
… Since then, Clearview AI, a New York-based FRT company, has been fined €20 million in Italy and £7.5 million in the UK. It has already been banned from selling its database in Illinois, ordered to delete data in France, and deemed illegal in Canada and Australia. Hikvision, with its thousands of camera networks in London, may be the next target for scrutiny.
… A few days ago, the Ada Lovelace Institute published both (i) the Ryder Review, an independent legal review led by Matthew Ryder QC on the governance of biometric data in England and Wales, which found that the legal protections in place are defective on many fronts, and (ii) a report calling for new legislation in this field with policy recommendations.
Private surveillance companies are regularly pursued, denounced, challenged, and condemned.
And still, there is no regulatory answer.
The use of FRT in the public sector should call for particular attention, thorough preliminary assessments, and continuous monitoring to verify the accuracy, reliability, and proportionality of such solutions, in line with data protection regulations. But as it stands, the situation looks more like the wild west, in that there are no rules.
Positive use cases
Of course, the technology itself may be applied in the public interest, from law enforcement, where it can help find criminals, to healthcare, where it can help identify symptoms or diagnose medical and genetic conditions.
We use it to simplify some aspects of our daily lives, such as in airports or to unlock phones. There is also some positive support for the use of FRT by the police, which is seen as a modern advancement to protect citizens from harm.
However, FRT is legitimately raising more and more concerns, among privacy professionals and the broader public alike.
One of the first areas put under the spotlight is the accuracy of the technology. In the UK, the Met Police's FRT software was allegedly 89% inaccurate, fed with outdated data, and likely not compliant with human rights law. Some progress has probably been made since; however, more work is needed to ensure that these technologies do not exacerbate over-policing, but instead provide a neutral tool that neither replicates nor amplifies biases. More assurances should also be given about the security measures around the processing, such as details on the storage, retention, and potential transfers of the data captured by FRT.
Then, there is an underlying risk of abuse of power and authority by those using FRT. More effort could be made to evidence the efficiency of these measures: collecting huge amounts of data is pointless if the means and tools to properly analyse it and draw conclusions from it are not allocated. At the moment, there is a glaring lack of guidelines around where, when, and in what context FRT may be used, which allows distrust in governments, public agencies, and the police to grow.
Additionally, the impacts of FRT on rights and freedoms, especially freedom of speech, freedom of movement, and the right to private life, are still to be assessed. A generalised and purposeless use of FRT leads to bulk surveillance that annihilates anonymity in public spaces, fostering a conformism incompatible with the notions of freedom and democracy.
Looking at current FRT uses, we can easily doubt that the principles of data minimisation and purpose limitation are observed. Moreover, there is simply no transparency about how the data collected by all these cameras is being used and where it is going.
Public opinion and trust as criteria for successful implementation
Privacy concerns are linked to the level of trust and legitimacy that people attribute to FRT uses. The more citizens have confidence in the organisations and authorities using FRT, and feel that such processing is legitimate, the more acceptable the wider public will consider it. This feeling of trust will depend on each use case: why FRT is used, how stakeholders communicate about it, and how they involve the public in the discussion.
In the Ada Lovelace Institute's 2019 survey, 77% of people said they were uncomfortable with FRT being used in shops to track customers, and 76% were uncomfortable with it being used by HR departments in recruitment. Unsurprisingly, the numbers are higher among black, Asian and minority ethnic groups, due to the biases, or the fear of biases, that they face.
Another study, based on YouTube comments, found that 75.4% of the comments expressed negative perceptions of police use of FRT. The study also highlighted that there may be public pressure, and a feeling of social obligation, to approve the use of FRT by the police: respondents expressed a lower level of support for FRT when their answers were anonymous, demonstrating how anonymity preserves freedom of expression in that case.
On average in the EU in 2019, only 17% of respondents said they would be willing to share their facial images with public authorities for identification purposes, and only 6% with private companies.
Despite all these penalties, concerns and findings, FRT continues to be deployed amid almost general indifference from regulators, exposing the sensitive personal data of thousands. Biometric data needs more protection, and FRT, like any other data processing, must be comprehensively assessed to demonstrate that it is necessary for the aims pursued.
EDPB & EDPS call for ban on use of AI for automated recognition of human features in publicly accessible spaces, and some other uses of AI that can lead to unfair discrimination | European Data Protection Board (europa.eu)