Firm, Faces, Fines: £17m intended privacy fine for misuse of facial recognition software

Once again, the advance and use of technology and AI are clashing with data privacy regulations. This is an issue which keeps occurring, and will continue to occur throughout the fourth industrial revolution. Whilst this is not the first time such a clash has come under the spotlight, this one concerns the use, or misuse, of facial recognition AI technology.

On 29th November 2021, the Information Commissioner’s Office (ICO), in a joint investigation with the Office of the Australian Information Commissioner (OAIC), issued a provisional intent to impose a fine of £17 million on a tech company, Clearview AI. The ICO has also instructed Clearview AI to stop processing the data of UK citizens held in its database. Clearview AI now has the opportunity to make representations regarding the intended decisions, which the ICO will consider before a final decision is made in 2022.

Clearview AI is a US-based company, operating internationally, which provides a “web-based intelligence platform for law enforcement to use as a tool to generate high-quality investigative leads”.[1] The platform is built on facial recognition technology and purports to hold a database of over 10 billion facial images which have been scraped from the internet. Amongst the individuals whose images have been collected are UK citizens.

These images can then be used by law enforcement agencies to aid in their investigations. For example, a law enforcement officer may have a photograph of a suspect but be unsure who the suspect is. The officer could upload the photograph to the Clearview AI platform, which will cross-reference it against the database to see whether it matches any of the faces Clearview AI has scraped from the internet (a simplified illustration of this kind of matching is sketched below). It is understood that Clearview AI had previously been trialled by UK law enforcement agencies, but the trial was discontinued and the company currently has no UK customers buying into its services.
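For readers unfamiliar with how this kind of matching works in principle, the sketch below shows the general idea: a probe photograph is converted into a numerical face encoding and compared against encodings built from previously collected images. This is purely illustrative and is not Clearview AI’s actual system; it assumes the open-source Python face_recognition library, and the file names are hypothetical placeholders.

```python
# Illustrative sketch only: match a probe photo against a small local
# gallery of face encodings using the open-source face_recognition library.
# File paths below are hypothetical placeholders.
import face_recognition

# Build a "database" of encodings from previously collected images.
gallery_paths = ["person_a.jpg", "person_b.jpg"]
known_names = []
known_encodings = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face, if any
        known_names.append(path)
        known_encodings.append(encodings[0])

# Encode the probe photograph supplied by the investigator.
probe_image = face_recognition.load_image_file("suspect_photo.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

if probe_encodings:
    probe = probe_encodings[0]
    # Smaller distance = more similar; tolerance 0.6 is the library default.
    distances = face_recognition.face_distance(known_encodings, probe)
    matches = face_recognition.compare_faces(known_encodings, probe, tolerance=0.6)
    for name, dist, is_match in zip(known_names, distances, matches):
        print(f"{name}: distance={dist:.3f} match={is_match}")
else:
    print("No face detected in the probe photograph.")
```

At real-world scale the same comparison would be run against billions of encodings rather than a handful of files, which is precisely why the provenance of those images, and the lawful basis for collecting them, is at the heart of the ICO’s concerns.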

The Chief Executive of Clearview AI, Hoan Ton-That, has stated that the ICO has misinterpreted Clearview AI’s technology and intentions, maintaining that the company acts in the best interests of the UK, that the data has been collected only from the open internet, and that it complies with privacy legislation.

In contrast to Clearview AI’s position, the ICO’s preliminary view is that Clearview AI’s collection of data belonging to UK citizens contravened data protection legislation by:

  • failing to process the information of people in the UK in a way they are likely to expect or that is fair;
  • failing to have a process in place to stop the data being retained indefinitely;
  • failing to have a lawful reason for collecting the information;
  • failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the UK GDPR);
  • failing to inform people in the UK about what is happening to their data; and
  • asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed.[2]

The ICO has effectively (and preliminarily) found that Clearview AI acted in a unilateral and ultra vires manner in its collection of data. The concept of a database of this nature is alarming for many individuals, particularly when the data has been scraped covertly. Given that Clearview AI holds over 10 billion images, much of it scraped from social media, there is a good chance that, if you are reading this article and have any pictures on social media, your face is among those in Clearview AI’s database. As such, this is very likely to affect you in one way or another.

Whilst the idea of a database to aid law enforcement is a notion many will be sympathetic to, much like with the use of covert and non-covert surveillance, there has to be a balance between the protection of the rights and freedoms of individuals and the effective operation of law enforcement. Much like in Bridges, R (On the Application of) v South Wales Police [2020][3], which concerned the use of surveillance technologies, it was not necessarily the use of the technology which was the primary issue, but the application of that technology. Similarly here, the ICO’s serious concerns relate to Clearview AI’s processing activities and the overall procedures governing the data, rather than the technology as a whole (although the two are of course linked).

Given that the ICO has instructed Clearview AI to stop further processing of the personal data of UK citizens, it will be very interesting to see whether Clearview AI’s representations lead the ICO to alter its preliminary decision in any way. This will be one to keep an eye on over the next six months.

[1] Clearview AI Website, Overview page – https://www.clearview.ai/overview

[2] Information Commissioner’s Office Website, ‘ICO issues provisional view to fine Clearview AI Inc over £17 million’ https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/

[3] Bridges, R (On the Application of) v South Wales Police [2020] EWCA Civ 1058 (11 August 2020) – https://www.bailii.org/ew/cases/EWCA/Civ/2020/1058.html
