Data and AI Ethics at IGS in 2024: End of Year Roundup and Looking Ahead to 2025.

Introduction

As this is the final ethics Insight Article of 2024, I’m going to give a round-up of the past year in data ethics at IGS, as we look ahead to 2025 and consider what the next year might hold.

In doing all of that, I’m also going to report on an event at which I recently spoke – the second Ethos Ethics and Compliance Charity Symposium, at the Royal Society of Medicine – which I think exemplifies some of the wider trends that I’ve noticed in our area over the year, and which I’ll expand on over the course of the article.

By way of a brief summary, what has been most notable in 2024 is that more or less all of our work in data ethics has been specifically within the subset which we can call AI ethics, or what is also referred to as ‘responsible AI’. And what’s more, as we look forward to the work that we have scheduled for 2025, it seems very much that this is a trend which will continue for the time being.

So, given where we stand at this moment, what might we conclude from the path we’re on?

Are we in an ‘AI bubble’?

An understandable response to the bewilderingly rapid proliferation of AI – and concerns about it – across most areas of our lives, from healthcare to retail to banking to defence and so on and so on, is that it can give the disconcerting sense of it being a ‘bubble’. But what does this mean, and why does it matter?

A ‘bubble’ occurs when interest and investment in an asset or a market starts to grow in a way that becomes unstable and unsustainable, because it becomes increasingly unclear whether the growing magnetism of the thing in question is grounded in a proper appreciation of its value, or whether it just appears to be something we should value because of the increasing amount of money that is already flowing towards it. Eventually, it becomes impossible to know which is the case, at which point the balance of risk from continuing to invest tips, and this undermines the structural integrity of the bubble. And, of course, what happens to bubbles eventually when they become too large and unstable? They burst, hence the analogy.

We are all familiar with references to a ‘property bubble’ every now and again when the housing market becomes overheated and property prices become unsustainable; and those of us old enough to remember the early 2000s will remember the ‘dotcom bubble’, in which the rapid acceleration in widespread consumer adoption of the internet fuelled a rush on investment in, increasingly, any company which could buy a ‘.com’ suffix and operate online. Eventually, this bubble burst too: the market became over-valued, many businesses collapsed, and many investors lost a lot of money.

This should all give us cause for concern. Certainly, AI does seem to have acquired the characteristics of a bubble, and it’s hard to escape the sense that the enormous hype around it is starting to obscure a clear-eyed assessment of: how we should respond to the embedding of AI in everyday life; what the significance of that is; what we should and shouldn’t expect, be worried about and so on. Indeed, there are already noises off describing AI as a bubble, so this analysis might come as no surprise.

However, even if AI is a bubble, it doesn’t follow that this is the end point of the analysis. Or, to remain consistent with the analogy, it doesn’t follow that when the bubble bursts, AI will have somehow just disappeared and the world will move onto valuing something else instead. In this sense the analogy is misleading. After all, when a property bubble bursts, it doesn’t mean that the buying and selling of property just stops. And closer to the AI case, just because the dotcom bubble eventually burst, this did not mean that the internet went away or ceased attracting users or investment. Indeed, testimony of investors who lost money when the dotcom bubble burst indicates that, despite the eventual bursting of the bubble, the rush of investment which fuelled it also laid down the technical infrastructure required for permanent consumer uptake of the internet over the longer term. So, even if something analogous is occurring with AI and a backlash or correction of sorts comes over the horizon, it does not follow from this that AI is going anywhere. Indeed, there are reasons to think it’s here to stay. Let’s examine this.

The end of AI winters 

In the 80 or so years since the theoretical possibility of artificial, machine-based, intelligence was conceptualised, there have been a couple of false starts, despite the excitement of what it might offer if it were realised. Following Turing’s postulation of a test which would constitute the necessary standard of proof – namely, whether a human conversing from one side of an opaque screen with either another human or an AI would be able to tell from the responses which of the two is on the other side – there was a great deal of excitement about what might be achievable quite quickly in AI.

This was also driven by dual developments in computing technology after WWII and a significant trend in philosophy of mathematics at the time – associated with figures such as Gottlob Frege, Bertrand Russell, Willard Van Orman Quine, Rudolf Carnap, Ludwig Wittgenstein, and others – which sought to establish systematically the relation between logic and language. These complementary innovations in technology and philosophy helped to lay down important physical and theoretical rudiments of computing as we have come to know it.

So, initially there was much excitement about AI and what the near future could look like. Unfortunately, however, this excitement, and the bursts of progress which generated it, was hampered at different points from the 1960s to 1990s. In general, the limitations which arose were: a lack of practically available computing power at the time for doing the experiments required to advance towards the goal; and a subsequent drying up of investment and research funds once it became clear that AI research had hit, for however long, a dead end. The periods of stagnation that these limitations created were known as ‘AI winters’.

However, eventually, the situation changed. Around twenty years ago an approach to AI began to emerge which focused on more modest goals than the dramatically futurist vision of a truly ‘thinking machine’ – a vision which we might now know as, more or less, ‘Artificial General Intelligence’, and which was hitherto, perhaps, the primary association one tended to make with AI. Although AI is now proliferating rapidly, the point at which AGI might be realised is still unclear. The reason why AI as we know it is proliferating so rapidly, however, is because of the new approach that emerged, which we tend to refer to as Machine Learning (ML).

The ML approach is characterised by more narrowly focused applications of AI. These applications are what have come to underpin many of the embedded AI systems with which we now interact on a daily basis. Although the aim of these applications is, understandably, to go beyond human analytic capacity and extend what humans can do in whatever field the technology is applied, and although they are ‘intelligent’ to the extent that they can do this and learn in a tightly defined way, they cannot think. They do not, in short, resemble the grand vision of an AGI which characterised the inception of the field.

It is the incredible utility of the ML approach which has led, with astonishing rapidity in recent years, to the ubiquity of AI across all areas of our lives, in almost any technological system with which we interact. The essential point to be made here is that AI has become a general use technology. It’s because of its general usefulness that it is unlikely there will be any more AI winters, since reasons for investment are unlikely to dry up now that AI underpins so many commercial systems.

To reinforce this point, it’s also interesting to think about instances where advanced and futuristic technologies did not, in fact, establish a path to a future influenced significantly by a game-changing general use technology. A good example here is Concorde.

Concorde

Even though it went out of service over twenty years ago, in 2003, Concorde still seems like a futuristic innovation, and it must have seemed impossibly so when it first flew in 1969. The idea of breaking the sound barrier in a commercial plane and getting from London to New York in under four hours remains an exciting and appealing idea.

As with the early years of investment into AI, there were antecedent socio-political circumstances which gave rise to the technology.

Concorde was made possible and realised partly because of the Cold War ‘space race’, in which, for reasons of demonstrating technological – including military technological – muscle and power, the Soviet Union and the West competed to create the first supersonic commercial airliner. In the end, despite the contemporaneous development of the Soviet Tu-144, it was Concorde which ultimately prevailed in this competition, and it remained in service for over thirty years, carrying with it associations of luxury, exclusivity, and high rolling throughout.

But if it was, and still is, such a remarkable innovation, why did it not spark an entire general market in supersonic commercial air travel? Well, there are several reasons, including but not limited to these.

First, while there may have been some benefit for some categories of customers, mostly business-related in different ways, to get from London or Paris to New York and back in a significantly shorter time, the benefit was not sufficiently large for the average user for business or pleasure, and especially not at the extremely high cost. This disparity became amplified as commercial air travel expanded and the cost of conventional flights continued to come down.

Second, and relatedly, as commercial air travel became more comfortable, the cramped environment of Concorde became, over time, less appealing relative to the lower cost of slower routes on more spacious airliners.

Third, the number of routes for which supersonic travel makes any appreciable difference is very limited. There is really no need for supersonic travel over short distances, and this constituted a natural limitation on the commercial viability of producing such airliners.

Fourth, and relatedly, the only route on which the vast costs of running Concorde yielded a profit was between London (or Paris) and New York, which undermined its reach and cost-effectiveness.

So, for these and other reasons – the final, arguably, being the Air France disaster which occurred in 2000 and led to 113 deaths – despite it being radically innovative, Concorde had a natural lifespan. It might be that supersonic commercial air travel returns, and for good, but the lesson from this is that this will only happen if it can become a technology which is generally useful, rather than incredibly exciting but useful only to a relatively small group of consumers. And because of that, we can see why AI is now unlikely to retreat, because it has achieved the utility and cost-effectiveness required to be a general use technology.

Finally, here, I refer again to the power of geopolitical interests in supporting innovation as a relevant factor, and one in whose context AI has general use features which Concorde did not. Competing state-level interests are increasingly fuelling AI investment, across trade, military technology, national security, and so on, which further entrenches AI systems as features of the technology infrastructure across every area of personal and professional life, systems which are by now highly unlikely to be unpicked or become redundant.

At this juncture, we’ll turn to some reflections on the recent Ethos charity symposium, the topic for which this year was AI ethics.

Ethos Charity Symposium 2024

The organisation Ethos Ethics and Compliance, which specialises in delivering ethics and compliance training in the pharmaceutical industry, convened its second annual Charity Symposium at the Royal Society of Medicine earlier this month.

Pharma is, of course, an area in which AI has evident and pervasive relevance, since an important area of application for ML techniques is health research, for example in the analysis of: personal health data to make disease risk and prognosis predictions; clinical trials data to identify cohorts of patients in whom a drug will be most effective; chemical compounds to predict which molecules will be the most effective targets for developing better personalised medicine regimes, and so on. Given the importance of health, insofar as AI can be used to yield advances in health outcomes through more effective therapies, there is a clear value to the application of AI here.

I said at the start that over 2024, more or less all of IGS’ data ethics work has been within that subset of the field which we can call AI ethics. And the invitation from Ethos was no exception. It was clear from speaking to Dr. Nick Broughton, Ethos’ Director, that within pharma, as everywhere, questions abound about how AI should and shouldn’t be used and for what reasons, what risks we should anticipate coming down the line in years to come and how they should be mitigated, and so on. As such, it seemed timely for the topic of this year’s Symposium to be ‘Writing the Rulebook’ for AI ethics in the pharma context.

Nick and the Ethos team put together a superbly informative programme for the day, drawing on valuable perspectives from primary care, pharma research, tech, law, comms and investment, each of which gave their own take on AI in pharma and what they take the ethical challenges associated with it to be. As such, I’d like not only to thank Nick, but also Heather Murray (AI for Non-Techies), Prof. Marc Kitten (Candesic), Thomas Balkizas (Microsoft), Piers Clayden (Clayden Law), Keith Grimes (Curistica), and Sheuli Porkess (Precisia C2-AI) for everything that I learnt on the day!

What was common to all of the presentations and kept emerging throughout the day was that, consistent with what I outlined in the earlier sections of this article, AI is, whether we like it or not (and some people, indeed, might not) here to stay, given that it has become a general use technology.

Since AI is here to stay, and since we are all increasingly aware of its potential power and the risks posed by its inherent unpredictability, anyone working in an AI-driven industry must increasingly turn their attention to considering: how it should be governed; for what reasons; and how the risks can be managed in such a way that the potentially realisable benefits are indeed realised for the people who need them.

Given the central theme of AI ethics in this year’s symposium, I gave a talk focusing squarely on a selection of five significant moral dilemmas for ensuring ethical AI governance in pharma: the black box problem; the relation between ethics and law; (informed) consent; balancing privacy and surveillance; and health justice. Drawing on my formal doctoral and postdoctoral training in philosophy and applied ethics within this and related fields, and my post-academic experience at IGS, I gave a non-exhaustive, but nevertheless, I hope, substantial, account of what the relevant ethical challenges here are, why they matter so much, and how we might start thinking about how to resolve them in our AI governance structures.

Underlining the indispensability of philosophy (again!)

A key feature of the approach I recommended pointed to the practical value of skills in philosophical analysis in particular. I’ve written several articles here already articulating why philosophy, far from being a purely abstract discipline untethered to any practical matters of importance, is profoundly valuable for deciding what to do in real-world situations and is a massively transferrable skill in any area you care to think of. So, it was perhaps unsurprising that I defended that position in this talk as well, but I did so because I think it’s true.

Anyone could be forgiven for having the misapprehension about philosophy that I just highlighted. After all, what value does a Hegelian dialectic, which at the end of time resolves all opposing views to reveal and comprehend the absolute idea behind all appearances, have for ensuring ethical AI governance in practice? Or, how are moral dilemmas about AI use in pharma resolved by Spinoza’s monist argument that all that exists is one substance, which is both God and nature, such that we are not really separate persons, but physically extended ideas in the mind of God, or nature? As interesting as the ancient dispute is between Heraclitus’ claim that the fundamental defining feature of reality is change, and Parmenides’ claim that the fundamental defining feature of reality is the absence of change, how on Earth is it going to help us figure out what we should and shouldn’t do with AI in drug development?

These are all fair challenges, but the value of a philosophical approach comes from elsewhere. The innovations in philosophy of language and mathematics in the 19th and 20th Centuries, pioneered by Frege, Russell, Carnap, Quine, Wittgenstein, and others – innovations which, as I said earlier, went on to enable the development of computing, and by extension AI – had the effect of establishing formal logic and the close analysis of language for truth, falsehood, validity, soundness, fallacies, and so on as a basic and central area of philosophy, alongside metaphysics, epistemology, and ethics. It might be best described as an approach to philosophical thinking and is often referred to by the shorthand of ‘analytic philosophy’. Its establishment has meant that since the mid-20th Century, if you study philosophy at more or less any university, you will develop a set of skills that is extraordinarily valuable in the AI context.

With analytic tools that specifically equip you to: spot errors in reasoning; effectively question assumptions; discern faulty premises and errant conclusions; identify ambiguities in meaning; tell the difference between a reason and an opinion; and so on, you are ideally placed to start thinking robustly about how AI should and should not be deployed and why.

Rules of deployment in governance require policies, and policies are composed from words. Policies need to be, among other things: clear, rational, unambiguous, robust, coherent, fair, dependable, applicable in practice, unlikely to lead to risks which could have otherwise been anticipated had they been thought through more skilfully, and so on. Poor policies create risks, and the risks posed by AI can be significant. By contrast, good policies mitigate risks and help to realise benefits, they promote trust, they are reliable, and they can conduce to the kinds of public goods – such as in health, for example – that we all wish to see and in which we all have a stake.

I’ll leave it there, but I hope this illustrates, once again, why training in philosophical analysis is so valuable in the context of the development of policies for a rapidly growing area that is still in its infancy but won’t be for long, and which wields such far-reaching power. As a set of skills for ensuring optimally robust AI governance, it isn’t something that’s nice but optional; it is as indispensable as the knowledge of the law required for ensuring regulatory compliance. It’s only the combination of these two bodies of knowledge that can give you the kind of comprehensive, watertight, effective, dynamic, responsive and forward-looking approach to governance that all organisations, whether in pharma or elsewhere, will need as AI continues to proliferate across our lives.

Closing remarks

As ever, it’s for the reasons I’ve outlined here that if your organisation is using AI – and it probably is, given our view from here over 2024 – and you need to ensure the highest standards of governance, whatever stage of development your organisation is at, then at IGS we have the skills you need to help.

It has been an exciting year, and so far it looks as though the AI trend is not going anywhere in 2025, even if ‘the bubble bursts’. At IGS we’re looking forward to the coming year and we hope you are too. Until then, we hope you all enjoy a break over Christmas, we wish you a happy 2025 and hope to hear from you in the new year!