Ethical Trade-Offs and the EU AI Act

Introduction

In this article I’m going to examine some ethical implications of the new EU AI Act. Most readers will already be familiar with the Act and its role in data governance across a wide range of sectors and industries, and will have some sense that the ethical dimensions of AI governance matter and why. The purpose of the reflections offered here is to develop that understanding further.

Recently I was delighted to be invited to speak about ethical dimensions of the Act on esynergy’s Sunny Side of Tech podcast series. Some of what I said in that conversation is reflected here, but this article builds on it, interrogating a few key ethical issues in more detail and highlighting, in particular, the inevitability of ethical trade-offs arising from however the Act is designed.

First, I’ll briefly summarise the Act, its aims and structure, and some of the benefits that might follow from its implementation. After that, I’ll focus on two trade-offs associated with the Act and explain in some detail why their ethical implications deserve consideration. The two I’ve chosen do not exhaust what is ethically salient about the Act – no doubt they barely scratch the surface. Nevertheless, I hope the analysis underlines both why the Act’s ethical implications matter and how ethical, philosophical analysis of those implications is valuable and necessary for ensuring good governance.

Summarising the Act and its Benefits

The EU AI Act is the first comprehensive legal framework for AI governance. It stipulates the conditions under which different types of AI system can and cannot be deployed. The Act was formally adopted in May 2024 and entered into force in August 2024, followed by a phased implementation period, with deadlines set according to the level of risk associated with different kinds of AI (which is to say, the shortest implementation phases correspond to the AI practices representing the highest level of risk). All organisations that develop or deploy AI within the EU will be obliged to adhere to the terms of the Act.

The Act has several advantages. Given the rapid pace of technological advance in AI, its increasing proliferation across all areas of life, and the potentially formidable power of some kinds of AI, the introduction of legal instruments for governing their use is a timely development. In this respect, as the first mover, the EU is leading the way and should be commended for doing so.

Crucially, as well as leading the way in stipulating AI governance requirements within EU countries, the Act’s impact, and therefore its benefits, are likely to be felt beyond the EU’s boundaries: any organisation that wishes to do business within the EU, or in collaboration with an EU-based organisation, will also be obliged to comply with the Act’s stipulations. Given the absence of prior international AI governance legislation, this means that numerous organisations in non-EU jurisdictions are likely to adhere to the Act’s conditions, making those conditions a norm, at least for the time being.

One useful feature of the Act is that it stratifies different kinds of AI application into categories of varying risk. Assuming that the categorisation decisions are sensible and appropriate, this should help to distinguish between AIs, and conditions of their deployment, that carry a higher or lower risk of causing different kinds of harm to individuals and / or society. This, in turn, should help organisations to develop governance processes appropriate for ensuring the safety of the particular AIs and applications they want to use. The Act categorises risks associated with AI as follows (a short illustrative sketch of how an organisation might encode these tiers appears after the list):

  • Unacceptable Risk: This category covers AI systems posing risks that are prohibited under EU law, such as those that violate fundamental rights, manipulate behaviour in a deceptive manner, or enable social scoring for government surveillance purposes.

Example: An AI system used by a government to monitor citizens’ social media activity without their consent, leading to the suppression of dissenting opinions and violations of privacy rights.

  • High-Risk AI Systems: This category includes AI systems with potential significant risks to health, safety, or fundamental rights. These systems must adhere to strict requirements, including conformity assessment, data quality, and transparency obligations.

Example: A medical diagnostic AI system used to analyse X-ray images for identifying tumours. If the system produces incorrect diagnoses due to bias or inadequate training data, it could lead to misdiagnoses and harm to patients.

  • Limited-Risk AI Systems: This category comprises AI systems with specific requirements for transparency and compliance to ensure user trust and safety, but these requirements are less stringent than those for high-risk systems.

Example: An AI-powered virtual assistant used in customer service, which assists users with routine queries. While the system does not pose significant risks to health or safety, it must still provide transparent information about its capabilities and limitations to users.

  • Minimal-Risk AI Systems: This category includes AI systems posing minimal risks to health, safety, or fundamental rights. These systems are subject to general transparency obligations, informing users that they are interacting with AI applications.

Example: A language translation AI tool used for translating text between languages on a smartphone app. While it collects data to improve its performance, the risks associated with its use are minimal, and users are informed about its AI-driven functionality.
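To make the tiering concrete, here is a minimal Python sketch of how an organisation might record the Act’s four risk tiers and attach indicative obligations to each when triaging its own AI use cases. The tier names mirror the Act’s categories, but the obligation labels and the `triage` helper are hypothetical simplifications for illustration only, not a statement of the Act’s legal requirements.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers used by the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict conformity requirements
    LIMITED = "limited"             # transparency requirements
    MINIMAL = "minimal"             # light-touch / general transparency


# Indicative (not exhaustive) obligations per tier, for internal triage only.
INDICATIVE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "conformity assessment",
        "data quality controls",
        "transparency and human oversight documentation",
    ],
    RiskTier.LIMITED: ["disclose AI use to users", "document capabilities and limits"],
    RiskTier.MINIMAL: ["inform users the system is AI-driven"],
}


@dataclass
class AIUseCase:
    name: str
    tier: RiskTier  # assigned by the organisation's own (human) assessment


def triage(use_case: AIUseCase) -> list[str]:
    """Return the indicative obligations attached to a use case's tier."""
    return INDICATIVE_OBLIGATIONS[use_case.tier]


# Example: a diagnostic imaging tool would typically sit in the high-risk tier.
if __name__ == "__main__":
    xray_tool = AIUseCase(name="X-ray tumour detection", tier=RiskTier.HIGH)
    print(f"{xray_tool.name}: {', '.join(triage(xray_tool))}")
```

The point of such a sketch is simply that the tiering lends itself to being embedded in an organisation’s own governance tooling; the assignment of a system to a tier remains a legal and ethical judgement, which is precisely what the first trade-off below turns on.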

It barely needs stating that governance conditions should be appropriate to the power of the technologies being governed and their risk of causing harm. To this extent, identifying egregious harms that AI could cause or exacerbate, as targets for prevention via governance, is an ethical requirement. Categorising AIs and their applications in a way that enables an effective appraisal of relative and absolute risk is therefore useful in meeting that requirement. (There remains the pertinent question of how we can be sure that the Act really does make the right risk categorisations – we have only the judgements of the Act’s designers to go on – which is relevant to the first trade-off I’ll present.)

The considerable benefits that follow from the Act leading the way in international AI governance do not come at zero cost. The Act creates trade-offs with ethical consequences that it is important to be aware of if the AI governance landscape is to be understood in sufficient detail. Next, we’ll turn to the two mentioned earlier, so we can better grasp, in the round, what is required to implement the Act in a way that optimises good AI governance.

The two issues I’ve picked out to analyse, and the trade-offs they create, are: a potentially misleading conflation of the trustworthiness and the risk acceptability of particular AI applications; and the possible entrenchment of power differentials when we consider which organisations will be more or less able to comply with the Act’s stipulations.

Ethical Trade-Offs and Their Implications for Governance

  • Conflating Trustworthiness and Risk Acceptability

The first trade-off following from the benefits of the Act’s governance stipulations is a potential conflation of trustworthiness and risk deemed ‘acceptable’. I have written previously about the vital importance of focusing on achieving trustworthiness first and securing trust second. Organisations that are custodians of people’s data should not assume that they can or will in fact be trusted; rather, demonstrating trustworthiness as custodians of data is necessary for being able to secure trust. Even if an individual wishes to use the services of an organisation that must collect their data in order to provide those services, it remains the ethical obligation of the organisation, as the body to whom the data would be entrusted, to show that it can in fact be trusted.

For instance, let us assume that an organisation which uses AI in its data analysis can demonstrate that it is institutionally trustworthy, through transparency about the rigour of its oversight of data handling processes, and so on. In doing this, the organisation satisfies a vital ethical requirement, and this is important. Indeed, organisations that are more invested in doing their due diligence, and in ensuring that what they do is as trustworthy as possible, are more likely to have processes that can and will also be trusted. Likewise, insofar as those responsible for developing the EU AI Act and its risk categories are most likely motivated by the need to protect individuals and society and to prevent potential harm from AI use, we can have some degree of assurance that they have been able to distinguish between risks that must be eliminated completely and risks that must be managed by governance stipulations appropriate to that task.

Nevertheless, and importantly, even if an organisation as a custodian of data can be trusted to handle that data as responsibly as possible, and even if the developers of the EU AI Act can be trusted to prioritise and balance relevant ethical values as well as possible in doing so, the trustworthiness of an AI as such is a separate question. Certainly, the less powerful the AI and the more limited and isolated its scope of application, the more it is likely to be possible to bring the trustworthiness of the organisation and the AI into alignment with each other, such that the trustworthiness of the former is more likely to imply the trustworthiness of the latter.

However, as the power of an AI increases, the extent to which we can assume such an alignment diminishes, because part of the organisational value of AIs derives from their ability to exceed the analytic capabilities of the humans who designed them, for whatever task the AI is being used to carry out. In turn, even if an organisation uses an AI with a risk profile deemed ‘acceptable’ by those who devised the EU AI Act, the fundamental question of whether the AI is in fact trustworthy – given the unpredictability on which an important component of its value is predicated – is not necessarily settled simply because the organisation deploying it has shown itself to be trustworthy, all other things being equal.

This challenge, which might be logically intractable in the case of more powerful AIs, leaves a potential gap in governance that must be accounted for as best as possible. Because the value of such powerful techniques is predicated on their ability to operate beyond the horizon of human analytic ability, we cannot assume that the AIs an organisation uses are themselves as trustworthy as the organisation. Nor can we assume that the criteria used to stratify particular AIs into risk categories in the EU AI Act are grounded in anything more fundamental than the best efforts of diligent, responsible human beings trying to create sufficiently reliable governance conditions for technologies whose value derives, in part, from operating in what is, to a greater or lesser extent, a regulatory blind spot.

With this in mind, it is important to remember that even the most trustworthy organisations, observing the highest standards of compliance in order to meet the stipulations of a legal instrument designed in good faith and with great analytic rigour, cannot guarantee – although they might well substantially reduce the risks associated with – the fundamental trustworthiness of the more powerful AIs they might wish to use. For this reason I highlight, again, as I have in previous articles, that the highest standards of data governance are not necessarily satisfied by compliance alone: any ethical risks associated with the stipulated conditions of compliance must also be identified and taken into account.

As such, while the Act is timely and represents a vital step forward in meeting the ethical challenges raised by AI, its very creation nevertheless produces trade-offs that it is important to acknowledge, if we do in fact value good AI governance as much as we claim to. We now move to the second of these.

  • Power Differentials and Justice

The second trade-off requires a bit of zooming out and is quasi-political in nature. It concerns the logical consequences of all organisations, whatever their size and resources, being obliged to follow the same set of rules for AI governance. Of course, if the Act stipulates what it should with respect to AI governance, then small organisations with fewer resources have just as much of a legal and moral obligation to comply with it as larger organisations with extensive resources. Nevertheless, the uniformity of the standards to be met, irrespective of size and resources, has knock-on effects that matter.

One consequence of requiring all organisations, whether large or small, and with limited or copious resources, to comply with the Act’s conditions is that a given organisation will find it more or less difficult to do so depending on the human and financial resources it can dedicate to the task. So even if an organisation finds meeting the Act’s stipulations an unwelcome regulatory headache – for example because doing so restricts business activity, slows growth, or eats into its bottom line – this is going to be less of a headache for some organisations than for others.

Amazon, for instance, engages in widespread use of AI across all areas of its business activities – from prediction of customer preferences to the planning of the deliveries of the products it sells, via every relevant stage of the process in between. Given Amazon’s vast profits and human resources, it is likely to be well equipped to implement whatever governance conditions are stipulated by the Act and remain profitable and trading. However, a small independent retailer with ten employees which uses AI for some or all of the same processes is likely to find that the implementation of new conditions represents a greater threat to its commercial viability, because of the more limited resources available to it.

Although this is something of a simplification, it points to a feature it is important to recognise: the viability of large and highly profitable organisations is, in general, less likely to be threatened by the implementation of the Act than that of SMEs. This has ethical implications with a political dimension, of which we should be aware.

In explaining why, it’s important to remember that AIs operate on data. This matters when we consider what regulation should achieve, which, crucially, includes the prevention of harm to humans. AIs that operate on data about individuals can, in principle, expose them to harm if they or their applications are not properly governed. Moreover, for an organisation to use someone’s data, it must have access to it, which in most cases means the individual who wants to use the organisation’s services has consented to it collecting their data. So far, so unproblematic. However, if implementation of the Act favours larger and more powerful organisations and in some cases threatens the viability of smaller and less powerful ones – if, in other words, the Act indirectly starts to put SMEs out of business – this can create a power differential whose consequences are not trivial.

We are already increasingly aware of some degree of opacity about how data in general and our own data in particular is being used, given the baffling complexity of the ubiquitous data infrastructure in which we all now live. Even if we think we can keep track of which organisations we have and have not given permission to collect and use our data, in reality, we probably cannot. We are also already increasingly aware of the power that very large commercial organisations that have our data can wield. Particularly salient here are Google, Amazon, Apple, and Meta, each – or possibly all – of which holds significant information about us, which we have given to them freely, and which could, potentially, be used to undermine our interests, depending on what those interests are, what data is held, what the organisation wishes or is legally permitted to do with it, and how these relate to the aims of governments in states in which they operate.

The concentration of power in a few very large organisations that have harvested our data should concern us. These organisations are so profitable that in many cases they are likely to be able to neutralise legal challenges brought by individuals or groups of individuals, and their profitability is so intimately connected to the interests of governments that anything which entrenches and exacerbates this power differential will correspondingly undermine the ability of individuals to exercise their rights in cases where they come to harm through failures in data governance.

Of course, the purpose of the EU AI Act is precisely to ensure good governance, prevent failures of it, and protect individual rights. Nevertheless, since the risk of governance failures cannot be completely extinguished, the fact that the Act’s implementation is likely to favour already powerful organisations is highly relevant to a clear-eyed and appropriately balanced appreciation of what is at stake in the Act’s implementation and the trade-offs it involves, even if trade-offs such as this one are, ultimately, unavoidable.

Having sketched out these two ethical trade-offs and why they matter, next I’ll make some remarks by way of summarising how the trade-offs interrelate. The aim of doing this is to point at the complexity, depth, and seriousness of what is at stake, ethically speaking, in the implementation of the Act, even though we can overall judge the Act itself to be a timely and necessary regulatory development.

Final Remarks

Earlier in this article we considered the importance of trustworthiness, but also what limitations there might be to it in practice. Insofar as the EU AI Act will govern the conduct of organisations which hold and use our data, we presumably want to be able to trust the Act, to the extent that poor data governance might expose us to risks of various kinds. However, even the most diligent and thoughtful policymakers cannot completely avoid the regulatory grey areas created by AIs, or applications of them, which operate at the borders of the use conditions the Act stipulates, and which might therefore be vulnerable to charges of arbitrariness. As such, we should be aware of the high bar the policy should meet, given that the inevitability of such grey areas underlines the fundamentally conditional nature of trustworthiness in this context.

Connected to this, we should remember the importance of separating the confidence we might (or might not) have in the trustworthiness of AI governance policymakers from the confidence we might have in the fundamental trustworthiness of the technologies they seek to govern, given that such technologies might act in ways we cannot predict. We want to be able to trust that the EU AI Act’s creators have assessed different risk thresholds soundly and devised appropriately stringent regulations. However, again, trustworthiness is conditional, since these assessments might turn out to be faulty or to require revision. This underlines, once more, the extent of the responsibility involved in ensuring that the Act is, on balance, as ethically sound as it can be.

Finally, all of these concerns are of particular relevance when we think about the power of organisations that hold and use our data, even when we value their services, and when they make our lives easier. The Act, quite rightly, seeks to ensure that individuals and societies do not come to harm from different kinds of AI and their various applications. As such, the Act seeks to promote conditions of trustworthiness for any organisation which wants to use AI for analysing people’s data. Nevertheless, retaining some degree of circumspection about trustworthiness is vital. In practice, in the AI context it is always conditional, and the power wielded by larger organisations which hold our data in particular is likely to be shored up relative to smaller organisations, even if the compliance activities contribute to increasing their trustworthiness overall.

Having made these final summarising remarks, I’ll conclude very briefly.

Conclusions and How IGS Can Help

As I stated early on in this article, the two ethical trade-offs that I’ve picked out and unpacked here do not remotely exhaust what is ethically salient or relevant about the EU AI Act. The nature of AIs in general and increasingly powerful AIs in particular makes AI governance a formidable regulatory challenge that is only going to grow, rather than diminish.

Our view at IGS is that good governance is not exhausted by legal compliance alone. The trajectory of technological advance in AI requires us to anticipate risks which are at present only potential but could become actual in due course, and to consider what we ought to do to mitigate them if they do, given that such risks might not yet be captured by legislation.

Undoubtedly, the EU AI Act is a welcome and timely regulatory step, but for the reasons I’ve explained here and many others which space forbids us from considering, we should continue to scrutinise it nevertheless and pay attention to its practical implications and the trade-offs that its application entails. If your organisation’s activities require it to comply with the Act and you want to ensure that you can achieve the highest, most trustworthy standards of AI governance possible, at IGS we are well equipped to give you the support that you need to do so.
