EU AI Act: Should Expanded Ban on AI Practices in Article 5 be Adopted?

On 14 June 2023, the EU Artificial Intelligence (AI) Act, proposed by the European Commission, was approved by the European Parliament. As the world's first comprehensive legislation on AI, the Act continues the trend of applying a risk-based approach, under which applications of AI are assigned to different risk categories: minimal or no risk, high risk, and unacceptable risk. Depending on the level of risk, obligations for providers and users vary from minimal transparency requirements for limited-risk AI systems, to strict lifecycle assessment of high-risk AI systems, to a full ban on AI systems posing unacceptable risks. Despite the European Parliament's supportive stance on the risk-based approach, its list of AI use cases considered to pose unacceptable risks is quite long, particularly the expanded ban on AI for biometric identification and categorisation, emotion recognition, and predictive policing. These significant amendments have drawn noteworthy responses from various stakeholders. This article takes a close look at these reactions and explores whether the expanded ban should be adopted from a data privacy perspective.

Legislative Context

In the text proposed by the European Commission, a list of prohibited AI practices is set out in Article 5, on the ground that their "use is considered unacceptable as contravening Union values, for instance by violating fundamental rights."[1] The list encompasses four prohibited types of AI system: those that "deploy subliminal techniques beyond a person's consciousness", those that "exploit any of the vulnerabilities of a specific group of persons", those that evaluate or classify the trustworthiness of natural persons, and "real-time" remote biometric identification systems in publicly accessible spaces. While acknowledging the necessity of prohibiting these systems, the European Parliament noted that the four prohibitions did not sufficiently cover the range of AI practices that need to be prohibited, given how widely AI is now applied. As a result, the Parliament expanded the list to ban the following intrusive and discriminatory uses of AI:

  • Remote biometric identification systems, both "real-time" systems in publicly accessible spaces and "post" systems, with the sole exception of law enforcement use for the prosecution of serious crimes and only after judicial authorisation;
  • Biometric categorisation systems using sensitive characteristics, e.g. gender, race, ethnicity, citizenship status, religion, and political orientation;
  • Predictive policing systems based on profiling, location or past criminal behaviour;
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.[2]

Is Opposition to Expanded Ban Justified?

This expanded ban on intrusive AI practices has met with opposition, particularly from private AI companies. Tony Porter, Chief Privacy Officer at Corsight AI, stated that the ban will "blindfold police officers, and hinder their fight against serious crimes like human trafficking." Echoing this so-called "don't police the police" position, he defended the use of remote biometric identification systems, such as facial recognition technology, to tackle horrific crimes. Granted, a mass identification system could help police locate a crime suspect within a few hours; but it could also identify thousands of innocent individuals in the same area who are never notified of police observation. Under Article 2(2)(d) of the UK General Data Protection Regulation (UK GDPR), the processing of suspect data by the police could fall within "the purposes of the prevention, investigation, detection or prosecution of criminal offences", in which case such processing would be regulated not by the UK GDPR but by the Data Protection Act 2018. However, the innocent individuals swept up in such processing retain their rights under the UK GDPR, and those rights must be weighed in the balance. Without notifying these individuals or obtaining their consent, this large-scale processing would not only lack a lawful basis, but would also impair the principles of data minimisation and purpose limitation. The reasoning behind this opposition thus prioritises the potential to fight crime, which is not guaranteed, while neglecting the invasion of privacy and the non-compliance with the UK GDPR.

Rationale behind Expanded Ban

Despite opposition from private companies, the expanded ban has been welcomed by a majority of stakeholders, from NGOs to research institutions. In their view, these intrusive forms of AI, originally classified as "high-risk" in Annex III of the European Commission's proposal, have exacerbated racism and discrimination, infringed fundamental rights, and reinforced societal inequality. Re-categorising them as "prohibited AI practices" could therefore adequately mitigate these unacceptable risks. This article supports that general view, and offers a more detailed look at the rationale behind the ban on three main AI systems: biometric identification and categorisation, emotion recognition, and predictive policing.

Starting with remote biometric identification: it can be defined as the use of AI to recognise human features, such as faces, voices, keystrokes and other biometric signals, in public spaces by automated means. The original ban applied only to "the use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement". The expanded list, firstly, removes the limitation to law enforcement uses in "real-time" settings. That limitation overlooked the fact that the use of identification systems for other purposes, such as private security, poses the same profound threat to the protection of personal data and privacy. Lifting it allows for a general ban on any use of AI for automated recognition of human features in publicly accessible spaces. Secondly, the Parliament adds "post" remote biometric identification to the ban list. If people realise that such systems are deployed in public spaces and that their biometric data could be recorded and stored for future use, they lose their reasonable expectation of privacy in public. For example, in the context of a political protest, post remote biometric identification "is likely to have a significant chilling effect on the exercise of the fundamental rights and freedoms, such as freedom of assembly and association and more in general the founding principles of democracy."[3] With this new restriction in place, people's expectation of general anonymity in public could be restored. Relatedly, the ban on AI systems that sort individuals into specific clusters on the basis of biometric data resonates with the UK GDPR's prohibition on processing special categories of personal data.

Turning to emotion recognition systems: these use AI to make inferences about a person's emotional state from data collected about that person, such as facial images or voice recordings. Two main issues likely motivated the European Parliament to extend the ban to emotion recognition. The first is doubt about whether current emotion recognition systems can actually do what they claim. A study assessing the evidence for inferring emotional states from facial configurations, conducted by Lisa Feldman Barrett and colleagues, concluded that "the science of emotion is ill-equipped to support" initiatives that aim to "figure out how to objectively 'read' emotions in people by detecting their presumed facial expressions."[4] Similar concerns have been expressed by Paul Ekman, a pioneer in the study of emotions and their relation to facial expressions, and by the International Biometrics + Identity Association, a leading voice for the biometrics and identity technology industry, both of whom have described emotion recognition applications as pseudoscience or unscientific. Given this gap between claims and evidence, a blanket ban on emotion recognition systems appears legitimate. Secondly, the violation of a range of human rights, in particular the right to privacy and the right to non-discrimination, further bolsters the European Parliament's decision. If border management authorities were to use emotion recognition to identify potentially aggressive people in crowds, this intrusive technology would not only breach the transparency and lawfulness obligations of the UK GDPR, absent consent from the people whose data are analysed, but could also produce discriminatory effects on already racialised groups: a study carried out at the University of Maryland found that emotion recognition systems assign more negative emotions to ethnic minorities.[5]

Finally, similar discriminatory impacts on ethnic minorities explain why predictive policing systems were added to the ban list. Such AI systems are typically deployed by law enforcement authorities to predict the occurrence of potential criminal offences based on profiling of a natural person or on past criminal behaviour. The use of these systems has been shown to "reproduce and reinforce existing discrimination, and already results in Black people, Roma and other minoritised ethnic people being disproportionately stopped and searched, arrested, detained and imprisoned across Europe." It should also be noted that, under the UK GDPR, a natural person has the right not to be subject to a decision based solely on automated processing, including profiling, and that the processing of criminal offence data requires additional safeguards. A prohibition on predictive policing would therefore ensure the AI Act's compatibility with data protection law.


By expanding the list of prohibited AI practices, the European Parliament has closed some loopholes in the proposed law, providing a framework that regulates intrusive applications of AI more strictly and significantly improves the protection of data privacy. However, given the rapid evolution of AI, even this expanded list may not cover the full range of risky AI systems that emerge in the future. How to keep pace with technological development and establish a mechanism for updating the list therefore becomes another challenge in the final negotiations between the European Parliament and the Council of the EU.

[1] European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Accessed 04 July 2023.

[2] European Parliament. MEPs ready to negotiate first-ever rules for safe and transparent AI. Accessed 04 July 2023.

[3] Access Now Europe. Access Now’s submission to the European Commission’s adoption consultation on the Artificial Intelligence Act. Accessed 04 July 2023.

[4] Lisa Feldman Barrett et al. Emotional Expression Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Accessed 04 July 2023.

[5] Lauren Rhue. Racial Influence on Automated Perceptions of Emotions. Accessed 04 July 2023.
