On 5th July, IGS had the pleasure of helping to organise the inaugural event in ORUK’s AI in MSK Education Series: AI Ethics, Consent and Data Fairness. I was delighted to discuss the importance of trustworthiness for institutions that process patient data.
There is no question about the significance of ‘trust’ and ‘trustworthiness’ for anyone handling sensitive data. The real difficulty lies in how we realise these values. ‘Trustworthiness’, or ‘trust’, is highlighted by the EU AI Act, as well as by various other regulations that continue to shape the landscape of ethical AI and data. Although many of us know it is important, we rarely clarify the senses in which it is important. Nor do we have a clear picture of what trustworthy data practices would require.
In this article, I would like to share my reflections on that topic, especially for readers who were not able to join us. Here are the key insights I highlighted at the workshop:
- There is no clear definition of ‘trustworthiness’. Because it can mean different things, it is important to follow whatever legal and ethical rules apply to your data practices. Any illegal or unethical data practice can undermine your trustworthiness.
- Trust—people’s confidence in your institution or practice—is best built by showing your commitment to legal and ethical data practices. This means being open about how you handle data and why your practices are ethical and lawful. It also means improving your team’s understanding of compliance and ethics in AI and data.
- In the context of patient data, ‘trustworthiness’ is secured by avoiding illegal or unethical processing of patient information. But to secure ‘trust’, you need to show your trustworthiness. This means making patients and the general public aware of your data practices. It is also important to show your commitment to handling patient data ethically.
Trust and Trustworthiness: Approaching the Ambiguity
First, ‘trust’ and ‘trustworthiness’ should not be conflated. Trust, in essence, is an attitude: it is about one’s confidence in something. But as Onora O’Neill (2020)—a British philosopher and a member of the House of Lords—argues,
Trustworthiness needs to be evidenced by establishing that agents and institutions are likely to address tasks and situations with reliable honesty and competence…Evidence of attitudes is therefore not usually an adequate basis for claiming that others are or are not trustworthy in some matter.
The implication of this distinction is simple: one might trust an individual or organisation, but that does not mean they are trustworthy. If I deceived you into thinking I would not pass your data to a third party, but then did exactly that, you would trust me, yet I would not be trustworthy. Therefore, to understand what trustworthiness requires in the context of patient data, we should first clear up the misconception that ‘trustworthiness’ is just about establishing ‘trust’.
But what exactly does ‘trustworthiness’ require? While many of us are seeking an action-guiding conception of trustworthiness that instructs us on which AI and data practices to avoid, the reality is that none of the major ethical and regulatory frameworks for AI and data offers a conception of trustworthiness that clearly states what lies outside the sphere of trustworthy practices. In fact, trustworthiness is often associated with a range of other ethical values in AI and data, making it all the more difficult to operationalise.
Along these lines, Karoline Reinhardt (2023, p. 735) offers a shrewd insight:
currently AI ethics overloads the notion of trust and trustworthiness and turns it into an umbrella term for an inconclusive list of things deemed “good”. Presently, “trustworthiness”, thus, runs the risk of becoming a buzzword that cannot be operationalized into a working concept for AI research.
To see why Reinhardt’s comment makes sense, consider the many interpretations of ‘trustworthiness’:
- Nvidia: ‘Trustworthy AI is an approach to AI development that prioritizes safety and transparency for the people who interact with it’ (Pope, 2024).
- European Commission’s ethics guidelines for trustworthy AI: ‘trustworthy AI should be: (1) lawful—respecting all applicable laws and regulations, (2) ethical—respecting ethical principles and values, and (3) robust—both from a technical perspective while taking into account its social environment’ (EU, 2019).
- The National Data Guardian (GOV.UK, 2022): trustworthy data practices must satisfy the following conditions: legal compliance; strong privacy protections; a commitment to transparency; establishing and demonstrating public benefit; ensuring appropriate mechanisms for choice; sharing power with the public.
- OECD (2024): Trustworthy AI involves respecting ‘human rights and democratic values’.
- US NIST (2023): There are seven features of trustworthy AI: ‘valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with harmful bias managed’.
Even in the prevailing discussions of patient data, the concept of ‘trustworthiness’ gains no more clarity. Consider the following attempts to interpret the term:
- Sheehan et al. (2021): ‘thinking about the values associated with research in the NHS (like the direct relationship to patient benefit) and developing processes that reflect and secure these values, works towards trustworthiness’.
- Mol and Ioannidis (2023): Trustworthiness (in medical publications) is to be established by ‘verifiable data’, transparency and safety.
- Nong (2023): Trustworthiness requires that healthcare institutions should demonstrate that ‘they are capable of handling data appropriately and protecting it from breaches’, that they truly serve patients’ needs, that healthcare professionals have ‘open communication with patients, even when that communication does not advantage [patients]’, and finally that ‘patients’ voices can be centered in tool design, development, implementation, and evaluation’.
From the examples above alone, more than ten values have been linked to ‘trustworthiness’: safety, transparency, lawfulness, robustness, explainability, fairness, privacy, security, accountability, public benefit, choice, power-sharing, human rights, democratic values, participatory design, and so on.
This puts us in a dilemma. Either (1) we treat ‘trustworthiness’ as a synonym for compliance and/or ethics, or (2) we treat ‘trustworthiness’ as a distinctive moral value, with at least some components additional to other values.
From a scholarly standpoint, of course, option (2) sounds more promising. As researchers in philosophy, we spend a great deal of time clarifying the differences between concepts. But from a pragmatic point of view, organisations need to know exactly what they should do to navigate the challenges related to trust and trustworthiness, rather than investing heavily in conceptual work. The best way to understand ‘trustworthiness’ in a realistic organisational context, I argue, is to treat it as equivalent to compliance and ethics.
One might question whether this gives us any guidance at all, because ensuring compliance and ethics in the context of patient data is no less ambiguous than figuring out what trustworthiness requires—perhaps even more ambiguous. But this ambiguity of compliance and ethics precisely highlights why you need people like us—data protection and ethics consultants who help you establish the best policies and procedures to mitigate any legal and ethical risks you might encounter.
The legal and ethical landscape of AI and data is evolving quickly, and this rapid change means that the concept of ‘trustworthiness’ will be loaded with an ever wider range of considerations over time. Therefore, rather than fixating on what ‘trustworthiness’ requires, we should concentrate on establishing the right policies and procedures that enable us to respond to any new ethical and legal challenges that arise in how we process patient data.
Institutions Processing Patient Data
To that end, we must first understand what kind of institutions have access to patient data. It is certainly not the case that only medical organisations and professionals (e.g., hospitals, clinicians, doctors, nurses, public health departments, labs) handle medical data. Various other entities also have access to patient data. Here are some examples:
- Online pharmacies: They need patient data to verify prescriptions and manage medication orders and delivery.
- Health insurance providers: They use patient data to process claims, determine coverage, and manage benefits.
- Universities: They use patient data for educational purposes, clinical training, and academic research, often with patient consent.
- Research institutions: They use patient data to conduct medical research, analyse health trends, and develop new treatments.
- Clinical trial organisations: They use patient data to assess the efficacy and safety of new drugs or treatments during clinical trials.
- Health apps: They gather patient data to provide health recommendations, track health metrics, manage chronic conditions and so on.
- Wearable device companies: They might capture patient data through devices to monitor health metrics like heart rate, physical activity, and sleep patterns.
- Cloud service providers: They store and process patient data remotely on behalf of other institutions.
Moreover, many organisations do not operate within a self-contained data ecosystem. In many cases, institutions pass patient data on to one another, creating potential legal and ethical risks. Consider the following examples:
- Hospital to Insurance Company: A hospital might send patient treatment records to a health insurance provider to process claims and verify coverage.
- Fitness App to Marketing Firm: A fitness app might collect health and activity data from its users and sell anonymised data to a marketing firm for targeted advertising (see the sketch after this list for why ‘anonymised’ deserves scrutiny).
- Pharmacy to Tech Company: An online pharmacy might partner with a tech company to develop a new medication management app. The pharmacy shares patient medication and adherence data with the tech company for integration into the app.
- Doctor’s Office to Employer: A doctor’s office might offer aggregated health data to a patient’s employer as part of a workplace wellness programme.
- Telehealth Service to Data Analytics Company: A telehealth service provider might share patient consultation data with a data analytics company to analyse healthcare trends and improve service delivery.
- Genetic Testing Company to Pharmaceutical Company: A genetic testing company might share patient genetic data with a pharmaceutical company to assist in developing targeted therapies.
- Health Research Institute to Academic Journal: A health research institute might submit patient data as part of a study to an academic journal for publication.
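Several of these transfers turn on claims of anonymisation or aggregation, and those claims deserve scrutiny. As a rough, hypothetical illustration (the record fields and function below are mine, not any particular app’s), this Python sketch shows salted-hash pseudonymisation, a common first step that does not by itself amount to anonymisation:

```python
import hashlib
import os

# Hypothetical fitness-app record; field names are illustrative only.
record = {
    "user_email": "jane.doe@example.com",
    "steps_per_day": 8432,
    "resting_heart_rate": 61,
}

# A secret salt: without one, identifiers could be recovered by hashing
# guesses and comparing the results (a dictionary attack).
SALT = os.urandom(16)

def pseudonymise(rec: dict, direct_identifiers: set) -> dict:
    """Replace direct identifiers with salted hashes; keep other fields."""
    out = {}
    for key, value in rec.items():
        if key in direct_identifiers:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # truncated token standing in for the value
        else:
            out[key] = value
    return out

safe_record = pseudonymise(record, {"user_email"})
print(safe_record)  # identifier replaced, attribute data left intact
```

Because the remaining attributes (activity patterns, heart rate) can often be linked back to an individual, records like this generally remain personal data under regimes such as the UK GDPR, which is one reason the fitness-app transfer above carries real legal and ethical risk.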
The cases I can offer here are limited, as there are many other ways in which one institution passes patient data on to another. However, there is a special type of case that deserves our attention, given our increasing reliance on AI to enhance productivity.
Productivity-Enhancing AI Tools and Patient Data
AI tools have become an inseparable part of our everyday work, and this applies to those working in medical institutions as well. While AI tools make our work much more efficient, they may sometimes collect or process patient data that requires careful treatment. Consider the following examples in which productivity-enhancing AI tools have access to patient data:
- Healthcare Administrator Using AI for Report Generation: An administrator at a hospital could be tasked with creating patient reports for physicians. To streamline the process, the administrator could use generative AI that can quickly draft these reports based on patient data inputs.
- Consultations on Virtual Meeting Software: A therapist conducts online therapy sessions using virtual meeting software that records and transcribes sessions for note-taking purposes.
- Online Software for Collaborative Work: A researcher working on a clinical study uploads patient data to a cloud-based collaboration platform to share with colleagues.
- AI Tools for Billing: Administrative staff at a medical practice could employ an AI tool to streamline the medical billing and coding process by inputting patient treatment details and billing information.
- Online Conversation Tools: A nurse uses an AI-powered communication tool to correspond with patients about their treatment plans and appointments.
- AI Assistants for Medical Transcription: A GP uses an AI-powered virtual assistant to transcribe patient consultations and generate visit summaries. During consultations, the GP inputs or dictates detailed patient information, including medical histories, symptoms, diagnoses, and treatment plans, into the AI assistant.
These cases remind us that trustworthy handling of patient data affects how we ought to use productivity-enhancing AI tools as well.
Building Trustworthiness and Trust
Three Pillars of Trustworthiness
How can you make your data practices more trustworthy, as evidenced by conformity to legal and ethical standards? At the very least, we believe you should commit to the three initiatives below.
First, external scrutiny. Your data practices should be scrutinised by a third-party individual or organisation. An unfortunate reality is that, in almost every organisation, there are power dynamics that impede people’s willingness to bring questionable data practices to the table. For example, if an employee knows that her career depends on accepting her company’s data practices as they are, even when those practices are legally or ethically risky, she might hesitate to raise her concerns. In such cases, you might want to engage an external institution (e.g., IGS) to oversee your data practices, as it is less embedded in your organisation’s ecosystem and thus less vulnerable to power considerations when giving advice.
To be sure, some consultancies, for the sake of maximising profit, might report their clients’ problematic practices only selectively. However, all consultants at IGS have undergone rigorous legal or ethical training and take impartiality seriously. We have helped numerous clients identify the legal and ethical risks associated with their data practices. Because it is our core mission to help everyone develop ethical and lawful AI and data practices, we will never compromise that mission for profit.
Second, AI and data literacy building. This involves developing your team’s understanding not only of how AI and data systems work but also of the legal and ethical risks those systems carry. As technology evolves, your team is likely to embrace a wider range of productivity-enhancing AI tools. However, the risks of these tools tend to be less visible to those who lack an understanding of how they work. For example, someone who relies heavily on generative AI to assist with report writing might feed sensitive information into the tool without realising that generative AI systems often improve themselves by collecting and learning from user input. It is therefore crucial that your team has robust knowledge of the basic structure of AI and data systems so they can assess the relevant risks.
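To make this concrete, here is a minimal, hypothetical Python sketch of the kind of pre-processing a data-literate team member might run before pasting clinical text into a generative AI tool. The patterns are deliberately simplistic and would not constitute adequate de-identification on their own; they merely illustrate the habit of asking what leaves your organisation:

```python
import re

# Illustrative patterns only; real de-identification needs far more
# (named-entity recognition, clinical NLP pipelines, human review).
PATTERNS = {
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b0\d{9,10}\b"),
}

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Jane Doe, NHS 943 476 5919, seen 05/07/2024, reports improved mobility."
print(redact(note))
# -> "Jane Doe, NHS [NHS_NUMBER], seen [DATE], reports improved mobility."
# The patient's name still slips through: simple rules are not enough,
# which is precisely the literacy point made above.
```

The point of the sketch is not the code itself but the reflex it represents: a team that understands how these tools handle input will pause and strip what it can before anything is submitted.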
Also, your team should have sound knowledge of the ethical and legal considerations that apply to your AI and data practices. IGS prides itself on being an education provider in ethical AI and data, in addition to the established consulting services we offer to clients. We will soon be launching our first set of training modules—Introduction to Data Ethics and Key Issues in AI Ethics—for any organisation or individual who wants to acquire that knowledge.
Third, it is essential that your team establishes a culture in which everyone regularly reflects on data practices or AI use cases that might give rise to legal and ethical risks. This can be achieved by holding periodic workshops where the team discusses the challenges they face in relation to AI and data. Whether the issue concerns team members’ AI or data literacy or the risks associated with a particular technology, open discussion helps because it enables everyone to address problems together. IGS can also help you develop relevant policies and tailored workshops and training.
Key to Building Trust: Transparency, Articulated Well
The key to building trust—that is, to building people’s confidence in how you handle data in general—is to be transparent in three steps:
- Step 1: You should be transparent about the impact of your data practices on those who entrust you with their data.
- Step 2: You should be transparent about the possible ways in which you process patient data.
- Step 3: You should be transparent about the efforts you have taken to ensure legal and ethical handling of data.
Why are these steps important? Research has found that
Providing transparent information about who will benefit from data access was the most important measure to increase trust, endorsed by more than 50% of participants across 20 [including the UK] of 22 countries. It was followed by the option to withdraw data and transparency about who is using data and why (Milne et al., 2021).
Clearly, transparency is what people take most seriously when it comes to trust. If you can also publicly demonstrate the efforts you have made to ensure compliance and ethics, you will further convince people that you are genuinely committed to processing data responsibly.
It would be a shame, however, to be committed to transparency yet struggle to make that commitment visible because you do not know how best to articulate your efforts. Our consultants can help here: we have a team of capable writers who know how to describe your AI and data practices in accessible language, in the form of reports and policy documents, for instance.
References
EU. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
GOV.UK. (2022). The data strategy: a blueprint for the evolution of a trustworthy data system? Retrieved from https://www.gov.uk/government/news/the-data-strategy-a-blueprint-for-the-evolution-of-a-trustworthy-data-system
Milne, R., Morley, K. I., Almarri, M. A., Anwer, S., Atutornu, J., Baranova, E. E., . . . Middleton, A. (2021). Demonstrating trustworthiness when collecting and sharing genomic data: public views across 22 countries. Genome Medicine, 13(1), 92. doi:10.1186/s13073-021-00903-0
Mol, B. W., & Ioannidis, J. P. A. (2023). How do we increase the trustworthiness of medical publications? Fertility and Sterility, 120(3), 412-414. doi:10.1016/j.fertnstert.2023.02.023
NIST. (2023). AI Risks and Trustworthiness. Retrieved from https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Foundational_Information/3-sec-characteristics
Nong, P. (2023). Demonstrating Trustworthiness to Patients in Data-Driven Health Care. Hastings Center Report, 53(S2), S69-S75. doi:10.1002/hast.1526
O’Neill, O. (2020). Questioning trust. In The Routledge handbook of trust and philosophy (pp. 17-27). Routledge.
OECD. (2024). OECD AI Principles. Retrieved from https://oecd.ai/en/ai-principles
Pope, N. (2024). What is trustworthy AI? Retrieved from https://blogs.nvidia.com/blog/what-is-trustworthy-ai/#:~:text=Trustworthy%20AI%20is%20an%20approach,people%20who%20interact%20with%20it.
Reinhardt, K. (2023). Trust and trustworthiness in AI ethics. AI and Ethics, 3(3), 735-744. doi:10.1007/s43681-022-00200-5
Sheehan, M., Friesen, P., Balmer, A., Cheeks, C., Davidson, S., Devereux, J., . . . Shafiq, K. (2021). Trust, trustworthiness and sharing patient data for research. Journal of Medical Ethics, 47(12), e26-e26. doi:10.1136/medethics-2019-106048