Computer says no: losing our protection from automated profiling?

Artificially intelligent systems are everywhere and make decisions about us, with and without our knowledge, every single day. The content presented to you when you open YouTube or Facebook; the products you see advertised as you browse the internet; the route you are given when you put a destination into Google Maps; or the top restaurant offered by Siri or Alexa when you ask for a recommendation. All these decisions about what you might want to see, or do, or buy, are automated. That is to say, they are not made by a person but calculated by a system – usually a machine learning algorithm – which learns to tailor the service to the individual user based on any number of factors. (A concise technical definition of such systems can be found here.)
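
To make that idea concrete, here is a deliberately simplified sketch of the kind of scoring such a system might perform. The user profile, content items and weights below are invented purely for illustration; real recommenders learn from vastly more signals and far more data.

```python
# A deliberately simplified, illustrative sketch of the scoring a recommender
# system performs. The interest labels and numbers are invented for
# illustration; real systems learn weights over vastly more signals.

# Interests the system has inferred about a user (learned, not declared).
user_profile = {"cooking": 0.9, "football": 0.1, "politics": 0.4}

# Candidate items, each described by the same inferred interest signals.
candidate_videos = {
    "10-minute pasta recipes": {"cooking": 0.8, "football": 0.0, "politics": 0.0},
    "Match highlights":        {"cooking": 0.0, "football": 0.9, "politics": 0.1},
    "Election night analysis": {"cooking": 0.0, "football": 0.0, "politics": 0.9},
}

def score(profile, item):
    """Score an item by how strongly it matches the inferred profile."""
    return sum(profile.get(signal, 0.0) * weight for signal, weight in item.items())

# Rank the candidates and 'decide' what the user sees first.
ranked = sorted(candidate_videos,
                key=lambda v: score(user_profile, candidate_videos[v]),
                reverse=True)
print(ranked[0])  # -> "10-minute pasta recipes"
```

The point of the sketch is simply that the "decision" is a calculation over inferred data about you, made without any person being involved.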

Most people would, whether rightly or wrongly, probably view the kinds of decisions listed above as fairly innocuous, and likely have no particular problem with them being made by an automated system. However, algorithms and machine learning systems are increasingly used in situations that can have much more serious consequences for people’s lives. Take, for example, the decision about what level of healthcare support you should receive; whether you will be successful when applying for a loan or a bank account; or whether you are viewed as being at high risk of committing a crime.

Such significant evaluations being made by automated systems can be problematic for a host of reasons. Primarily, this is because machine learning systems make decisions rapidly and in extremely complicated and subtle ways, drawing on thousands or even millions of data points. These processes will often not be obvious, or even knowable, to humans – including the humans who create the systems in the first place – and this is clearly at odds with the vital principles of transparency and fairness set out in data protection law.
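
As a purely illustrative sketch – with made-up weights, not any real credit or risk model – the following shows why such a decision resists simple explanation: the outcome is a sum of thousands of tiny weighted signals, none of which amounts to a human-readable reason on its own.

```python
# An invented, toy illustration of why heavily data-driven decisions are hard
# to explain: the outcome is built from thousands of tiny weighted signals.
import math
import random

random.seed(0)

NUM_SIGNALS = 10_000  # real systems may draw on far more data points per person

# Stand-ins for learned model weights and one person's data points.
weights = [random.uniform(-0.01, 0.01) for _ in range(NUM_SIGNALS)]
person = [random.uniform(0.0, 1.0) for _ in range(NUM_SIGNALS)]

# The 'decision' is a probability assembled from every one of those signals.
raw_score = sum(w * x for w, x in zip(weights, person))
probability = 1 / (1 + math.exp(-raw_score))

print(f"Automated decision: {'approve' if probability >= 0.5 else 'refuse'} "
      f"(derived from {NUM_SIGNALS} weighted data points)")
```

No single line of that calculation corresponds to a reason a person could meaningfully contest, which is precisely the transparency problem the law is trying to address.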

Another, and probably the most easily identifiable, problem with automated decision-making systems is the risk of inherent bias or prejudice – and you do not have to look far to find examples. In 2020, the UK Government infamously employed an algorithm to decide which grades A-level students should be awarded after they were unable to sit exams due to the pandemic. Almost 40% of students received grades that were lower than anticipated, and there was further outcry over the fact that the proportion of top grades awarded to private fee-paying schools rose by more than double the rate for state comprehensive schools – exacerbating existing inequalities.

The point is that, in our opinion, it is very important to have some codified legal protection against major decisions being made about us by opaque, potentially flawed and sometimes unexplainable automated systems. Article 22 of the General Data Protection Regulation, whilst not perfect in terms of clarity, gives us this protection: a right not to be subject to solely automated decisions which produce legal or similarly significant effects, together with the right to human intervention.

It is concerning, then, that the UK Government is now proposing to remove Article 22 altogether, as detailed in a recent consultation paper. (This paper was discussed more broadly in a previous IGS Insights piece, here.) The Government’s primary stated reason for this change is the goal of encouraging technological innovation. That is a worthy aim – artificial intelligence and machine learning systems have incredible problem-solving potential, and have already produced remarkable results in applications ranging from medical diagnosis to tackling climate change. However, pursued in isolation from legal safeguards and the balancing of data subjects’ rights, this proposed change would heavily favour large, technologically advanced data controllers and further skew the power imbalance between them and their customers and users.

The Information Commissioner’s Office has expressed a similar opinion, stating: “resolving the complexity by simply removing the right to human review is not, in our view, in people’s interests and is likely to reduce trust in the use of AI.” Unsurprisingly, privacy and digital rights groups are also strongly opposed.

We hope that the Government registers the concerns raised regarding this proposal and the consultation paper generally, and moves away from its apparent course of prioritising technological innovation above all else. Allowing data-driven technology to realise its full potential whilst also safeguarding the rights and freedoms of data subjects is surely the balance that the law should strive to maintain.
