Data and AI Ethics 2025: An IGS Roundup

As 2025 draws to a close, it’s a natural moment to pause and reflect on a year that has been transformative: for the data and AI ethics landscape, for the organisations navigating it, and therefore for us at IGS. Across consultancy, training, articles, webinars, panels, and collaborative initiatives, a wide range of important ethical themes in data and AI have defined my work, and thus IGS’ leadership in ethical data and AI governance, over the course of 2025. 

In what follows, I’ll start with a case study from each of our 2025 data and AI ethics consultancy and training services. I’ll then provide a digest of IGS’ thought leadership in ethical data and AI governance, as represented in our newsletter articles and webinars, followed by a summary of the public engagement activities and industry / academia collaborations that I’ve led for IGS in this area. Before wrapping up I’ll give a brief update on my ongoing work bridging industry and academia, and I’ll conclude with a short statement of what I predict anyone working in the data and AI landscape should expect in 2026. 

All of this should, I hope, give you a ground-level view of what has emerged as most ethically salient and pressing within the data and AI governance landscape over the course of the year, and of what the immediate future looks like, building on last year’s roundup, in which we reviewed 2024 and looked ahead to data and AI ethics in 2025. 

Before kicking off with case studies from our consultancy and training work and proceeding to the numerous other activities that I’ve led for IGS in its data and AI ethics services, it’s worth giving a headline at this juncture. Taken together, this work offers a grounded view of the most pressing questions shaping data and AI ethics in 2025. Crucially, all of the work that we have done this year has been unified by a central conviction; namely, that ethics is not an optional feature of optimal governance. Rather, it is the backbone of trustworthy innovation, and it will be to any data- and/or AI-driven organisation’s advantage to treat it as such. 

So, let’s begin. 

Consultancy and Training 

Consultancy 

Perhaps the most exciting development of 2025 in our ethics consultancy was taking on eAltra as a client. What makes this collaboration so compelling is not only the ambition of the technology, but the clarity of eAltra’s values. eAltra is building an AI-driven triage and support platform for people living with cancer, starting from a simple and powerful premise; namely, that patients deserve clarity, choice, and confidence throughout their care journey. We’re all aware that health systems these days are under huge strain and that patients are increasingly expected to navigate complex pathways themselves. Equally, giving patients as much autonomy and control over their care decisions as they would like is an important ethical goal. In creating a means of balancing these, eAltra’s patient-centred approach is timely and necessary. 

What, in our view, sets eAltra apart is its explicit commitment to ethics as a design principle rather than a compliance afterthought. Patient control, meaningful consent, transparency, and security are embedded into the platform’s vision from the outset. The aim of the platform is not to replace clinicians, but to enhance care by ensuring that the right information reaches the right people at the right time, without undermining trust. From an ethics and governance perspective, this is exactly the kind of responsible innovation that demonstrates how data and AI can be used to empower people rather than increase risk.  

eAltra’s potential is significant, and this was recognised earlier in the year when it won the Best Emerging AI category of the Spark Crowdfunding Top 100 Most Ambitious Companies in Ireland award. eAltra points toward a future in which shared care records are a right, patient agency is strengthened, and smart data is used compassionately to improve care and health outcomes. Supporting that journey has been, and continues to be, both professionally rewarding and ethically meaningful for us at IGS. 

The other notable new consultancy development this year has been the invitation to join leading neurotechnology consultancy Cerebralink as an expert associate consultant, in which capacity I look forward to delivering subcontracted work on behalf of IGS for Cerebralink clients requiring expertise in the ethical analysis of emerging innovations in neurotechnology and neuroscience. 

Training 

2025 saw IGS delivering valuable training in the ethical demands and dimensions of data and AI governance across a range of sectors. Two notable examples demonstrate the breadth of this range. At one end of the spectrum was our work delivering training in responsible research and innovation relating to AI use in chemistry and chemical engineering, for a prestigious UK university funded by one of the UK research councils; at the other, our continuing development of AI ethics education seminars in orthopaedic care and research for the UK’s leading orthopaedics charity, Orthopaedic Research UK, the first of which in the series is publicly available online here. 

To give an idea of just how broad a range of topics in ethical data and AI governance our training covered, these two examples alone ranged from: data quality in musculoskeletal medicine, taking in the risk of bias and insufficient diversity of representation in datasets, the importance of informed consent in the AI context, and the wider societal harms that follow from poor health data governance; to the opportunities and risks of AI use for applications such as molecule discovery, material design, reaction pathway prediction, sustainability and green chemistry, and digital twin systems for chemical engineering; and how to manage IP in chemistry and chemical engineering in an AI-driven research, commercial, and industrial context. 

It has been a thrilling experience to learn so much in the design and delivery of our training in data and AI ethics and responsible research and innovation across such a wide scope of sectors and applications, and I, along with all of IGS, look forward to building on this in 2026. 

Thought Leadership 

Ethics in Practice: Data and AI Newsletter 

In 2025, IGS launched Ethics in Practice: Data and AI, a fortnightly newsletter published on LinkedIn that has since become a core platform for our thought leadership in data and AI governance. The newsletter replaced our earlier website-based ethics insight articles and quickly gained traction, attracting 956 subscribers across a wide range of sectors. Across the year, the series explored how ethical reasoning can be applied practically to some of the most pressing governance challenges emerging from rapid technological change. 

A central theme running throughout the newsletter has been the argument that ethics is not an abstract, optional add-on to governance, but one of its most powerful tools. Across a huge range of sectors, including insurance, finance, health care and research, energy, and technology, regulatory frameworks are often outpaced by what AI systems can technically achieve. Repeatedly, the newsletter made the case that applied ethics helps close the gap between what organisations can do and what they should do, strengthening accountability, trust, and long-term resilience rather than competing with compliance. 

Several editions focused on re-centring data ethics amid growing saturation of ‘responsible AI’ discourse. As attention increasingly shifts to model-level risks and AI safety, the newsletter argued that organisations risk neglecting foundational ethical questions about data quality, fairness, provenance, and governance. Throughout 2025, I emphasised that no AI system can exceed the ethical integrity of the data on which it depends, making ethical data governance the bedrock of any credible AI strategy. 

Health data governance formed another major strand of the series, particularly in relation to precision medicine and genomics. These pieces highlighted the ethical distinctiveness of genomic data and the importance of patient voices, meaningful consent, and fair representation in healthcare and research. Across this work, IGS consistently argued for governance approaches that go beyond compliance alone to foster transparency, trustworthiness, and patient empowerment. 

The newsletter also tackled emerging regulatory challenges, including ethical blind spots in the EU AI Act. As implementation progresses, articles explored areas where individuals and society remain exposed, from general-purpose AI to biometric systems and workplace monitoring. Rather than treating regulation as the end point, the series argued for anticipatory governance that embeds ethical safeguards proactively to protect dignity, fairness, and public trust. 

Recurring themes included the limitations of contemporary consent models, the ethical importance of data confidentiality, and the need to cut through the growing noise surrounding AI ethics. In response to widespread confusion and hype, IGS complemented the newsletter with accessible public education, including a concise AI ethics crash course, reinforcing the need for governance grounded equally in principle and practice. 

Later editions expanded the scope further, examining the ethical foundations of the green transition, the growing strategic value of ethics in corporate governance, and the profound governance challenges posed by emerging neurotechnologies such as digital twins of the brain. Together, these pieces reinforced a consistent message: ethical governance is becoming a core driver of innovation, legitimacy, and trust in a rapidly evolving data and AI landscape. 

Finally, I’d like to give thanks for the occasional contributions and commentary provided by expert colleagues in several of these newsletter articles: Dr. Simon Jenkins at the University of Warwick, Dr. David Lawrence at Durham University, Dr. David Lyreskog at the University of Oxford, and Nick Meade at Genetic Alliance UK. 

IGS Webinar Series 

In 2025, IGS launched a public-facing LinkedIn Live webinar series aimed at making data and AI ethics practical, accessible, and directly applicable to real-world governance challenges. Each session focused on areas where organisations are increasingly required to make defensible decisions amid regulatory ambiguity, technological complexity, and growing public scrutiny. 

The series opened with Data Ethics: What guides you when the law can’t?, which examined the foundational role of ethics in contemporary data governance. As data- and AI-driven capabilities advance rapidly, regulation often lags behind, leaving organisations exposed even when they are technically compliant with frameworks such as UK GDPR. This session explored why compliance alone is no longer sufficient and how ethical reasoning can help identify and manage governance risks that fall outside clearly defined legal requirements. Using real-world case studies, I demonstrated how ethical analysis can be applied in practice to support decision-making and justify choices under conditions of uncertainty. 

The second webinar, Is your AI trustworthy?, focused on AI ethics in a landscape crowded with hype, buzzwords, and competing narratives about ‘responsible AI’. The aim was to cut through this noise and clarify the core principles organisations need to govern AI systems responsibly. I outlined the foundations of AI ethics—fairness, accountability, transparency, and human oversight—and showed how these can be embedded into governance processes in ways that complement regulatory compliance and strengthen trust. 

The third webinar, How can we govern genomic data fairly and effectively?, brought a health-specific lens to ethical governance. I was joined by Nick Meade, CEO of Genetic Alliance UK, the leading advocacy organisation for people affected by genetic, rare, and undiagnosed conditions. Together, we explored why genomic data is ethically distinctive and why its governance must extend beyond standard data protection approaches. As genomic data becomes increasingly central to healthcare and research, our discussion focused on the challenges of consent, equity, accountability, and public trust, and on what is needed to ensure that innovation in genomic medicine benefits those who need it most. 

Public Engagement 

2025 saw a marked intensification of public debate around data and AI ethics, driven by the rapid expansion of AI systems into everyday life and a series of high-profile controversies. These ranged from concerns about serious harm caused by poorly governed AI chatbots, to questions about privacy, autonomy, and identity raised by brain–computer interfaces, alongside growing calls for improved ethical literacy in AI development and use. Against this backdrop, IGS remained actively engaged in public-facing conversations that bridged ethics, law, technology, and policy. 

Throughout the year, on behalf of IGS I contributed to this discourse through invited talks, panels, and masterclasses across academic, professional, and policy-facing settings. Highlights included speaking at the University of Oxford Centre for the Creative Brain on the ethical governance of brain–computer interfaces and bioelectronic technologies, where discussion focused on the blurred boundaries between therapy and enhancement and the importance of embedding ethics into emerging neurotechnology governance from the outset. I was also invited by the Life Science Access Academy to deliver a masterclass on healthcare ethics in an AI and big data context, exploring how organisations can ensure fairness, accountability, and transparency in data-driven healthcare innovation using practical ethical reasoning tools. 

Later in the year, I participated in the Inaugural Symposium of the Centre for Neurotechnology and Law at the British Library, contributing to a panel discussion on whether emerging neurotechnologies require specific legislative protections for neurorights, given the uniquely intimate nature of neural data. 

Industry / Academia Collaboration 

Industry Advisor, Leadership Team of Oxford Winter Neuroethics School 

A key part of my ongoing role as a Visiting Fellow with the Neuroscience, Ethics and Society (NEUROSEC) group in the Department of Psychiatry at the University of Oxford is my participation as Industry Advisor on the Leadership Team of the Oxford Winter School in Neuroethics (OWNS). 

OWNS is an intensive programme designed to explore cutting-edge research in Neuroethics. Led by experts from the University of Oxford’s Neuroscience, Ethics and Society (NEUROSEC) Team, the course brings together distinguished Neuroethics researchers from around the globe, offering participants a unique opportunity to engage with leaders in the field. Our goal is to equip the next generation of Neuroethicists with the tools and insights needed to address the complex methodological challenges within this evolving field. 

The curriculum is uniquely designed to be highly useful to researchers and professionals who want to develop a specialism in Neuroethics. The course adopts an interactive hands-on approach to learning the skills necessary for a career in Neuroethics and adjacent fields. It is especially suitable for those intending to pursue doctoral or postdoctoral academic research in Neuroethics; to complement a medical research career with training in Neuroethics; or to pursue an industry or policy career that requires or benefits from advanced Neuroethics skills and knowledge.   

Over 2019-2022, OWNS’ Director Dr. David Lyreskog and I jointly led its design, prior to my leaving academia for IGS. Since then, David and the team have worked assiduously to make OWNS a reality, and it is enjoying its inaugural year with great success. 

Since OWNS has a dual focus, catering not only for people wishing to pursue academic careers but also for those heading into industry and beyond, my role is to provide support and advice to those on the cohort pursuing the latter path, helping them use the skills in ethical analysis and research design that they learn to advance careers in sectors beyond the academy. Many of these activities will focus in particular on the in-person ‘Learning Accelerator’ component of OWNS, taking place in Oxford in January. There will be more to say about this once the course has concluded, so for now watch this space! 

Academic journal articles 

Having been out of full-time academia for more than a couple of years now, it’s always gratifying to remain involved in academic writing collaborations. It’s great to continue contributing my years of training and experience in theoretical and applied ethics to the academic research process on behalf of IGS, in the academic currency of peer-reviewed journal papers; the peer-review process itself is also a helpful way to make sure that I’m still meeting the required standards of analytic rigour. 

To that end, over 2025 I’ve enjoyed co-authoring two papers which are on their way through the academic publication pipeline. I’ll hold back the details of the first of these, as it’s undergoing review following resubmission after peer review, but I hope that it’ll be possible to say more about this before long. 

The second, however, has just been accepted for publication in the Sage journal Medicine, Science and the Law and should be online and in print before long. It has been led by Dr. Georgia Ashworth at the Royal Free London NHS Foundation Trust, with the genesis and marshalling of the paper coming from Dr. John Tully, Clinical Associate Professor in Forensic Psychiatry & MRC Clinician Scientist Fellow at the University of Nottingham. The paper investigates a fascinating topic in clinical and data ethics; namely, the ethics of electronic monitoring in forensic psychiatry, which, given the ethical and legal tensions between individual rights and societal duties, and the theoretical and empirical complexities of mental capacity, is a timely and important focus of ethical analysis in a clinical context. I’m grateful to Georgia and John for the invitation and opportunity to co-author the paper, and I look forward to seeing it in press soon!  

Looking ahead to 2026 

If data and AI ethics 2025 reinforced one message, it is this: ethical governance is no longer a parallel conversation to effective, reliable, efficient and safe innovation; rather it is a precondition of it. Whether in genomics, neurotechnology, AI deployment, energy, insurance, or any other sector, organisations that lead on ethics as well as compliance will be those best equipped to innovate responsibly, earn trust, and remain resilient in a rapidly shifting regulatory and technological environment. So, as we move into 2026, IGS will continue advocating for governance grounded in ethics and designed for the real world. I look forward to continuing this work with our partners, collaborators, clients, and wider community. 
