Artificial intelligence (AI) has significant potential to enhance medical care; in particular healthcare tasks, it can already perform at least as well as humans (Davenport & Kalakota, 2019). While harnessing this potential is important, we should also be concerned about how AI is used in medical care; after all, we all have an interest in receiving transparent, responsible, robust, and competent medical services.
However, it is also important to ask whether people should be informed about the use of AI in their medical care. As it turns out, the view that they should is fairly common. According to a 2024 report by the Health Foundation, ‘the majority of people think they should be told when AI has been used in health care, but it is particularly important for people 65 years and older’ (Thornton, Binesmael, Horton, & Hardie, 2024).
Inspired by Joshua Hatherley’s (2024) recent discussion, this article explores that question. I begin by presenting some key use cases of AI in healthcare. I then examine a few possible ways in which the use of AI in medical care could be disclosed to patients, before explaining why the question of which AI uses to disclose remains a significant challenge.
AI Uses in Healthcare
AI has been widely used in healthcare, for example, in assisting with the diagnosis of diseases, producing appointment letters, and determining the best medicines or treatment plans for patients. Multiple AI technologies are often involved in achieving these tasks:
- Machine learning (ML): ML involves teaching computers to recognise patterns in data, enabling them to make predictions or decisions without being explicitly programmed. In healthcare, ML is often used to predict treatment outcomes and to help radiologists detect disease in medical images such as chest X-rays (Erdaw & Tachbele, 2021). A minimal illustrative sketch follows this list.
- Natural Language Processing (NLP): NLP enables computers to understand, interpret, and generate human language. In healthcare, it is used for tasks such as speech recognition, transcribing spoken medical notes into written text, and analysing clinical documentation. For example, NLP can help extract important medical information from patient records to improve diagnosis and treatment planning (Koleck, Dreisbach, Bourne, & Bakken, 2019); the second sketch after this list gives a toy version of this extraction task.
- Expert Systems: These systems encode expert knowledge as a set of rules within a specific domain. In healthcare, they have been widely used for clinical decision support, assisting doctors in making treatment decisions based on established guidelines (Saibene, Assale, & Giltri, 2021); the third sketch after this list shows the kind of if-then rules involved.
- Physical Robots: Surgical robots, for instance, are increasingly used to assist with procedures such as gynaecological, prostate, and head and neck surgeries, enhancing precision and reducing recovery times for patients. Robotic systems allow surgeons to perform minimally invasive procedures with greater accuracy than traditional methods (Saibene et al., 2021).
- Process Automation: AI-assisted software is used to automate routine administrative tasks in healthcare, such as processing insurance claims, managing medical records, and overseeing revenue cycles (Davenport & Kalakota, 2019).
- Algorithmic Personalisation: AI technologies help improve health outcomes by personalising care based on an individual’s needs and behaviours. For example, AI tools can send reminders to patients to take their medications or follow treatment plans, helping them stay on track with their healthcare goals (Schork, 2019).
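To make the machine-learning item concrete, here is a minimal sketch in Python using scikit-learn. The features, data, and model choice are invented for illustration and do not reflect any real clinical system:

```python
# Minimal sketch of ML-based prediction (illustrative only; the
# features and data below are invented, not a real clinical model).
from sklearn.ensemble import RandomForestClassifier

# Hypothetical imaging-derived features per patient:
# [texture score, lesion diameter (mm), edge irregularity]
X_train = [
    [0.82, 14.1, 0.31],
    [0.15, 4.2, 0.05],
    [0.77, 11.8, 0.42],
    [0.20, 3.9, 0.08],
]
y_train = [1, 0, 1, 0]  # 1 = abnormality confirmed, 0 = not confirmed

# The model learns the pattern from examples rather than being
# explicitly programmed with diagnostic rules.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability estimate for a new, unseen case.
new_case = [[0.69, 9.5, 0.28]]
print(model.predict_proba(new_case))
```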
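For the NLP item, here is a toy version of extracting medication information from a free-text note. Real clinical NLP relies on trained language models; the hand-written pattern below is purely illustrative:

```python
# Toy sketch of extracting medication mentions from a free-text note.
# Real clinical NLP uses trained models, not a hand-written pattern.
import re

note = ("Patient reports chest pain. Prescribed aspirin 75 mg daily "
        "and atorvastatin 20 mg at night.")

# Hypothetical pattern: a drug name followed by a dose in milligrams.
pattern = r"([a-z]+)\s+(\d+)\s*mg"
for drug, dose in re.findall(pattern, note, flags=re.IGNORECASE):
    print(f"medication: {drug}, dose: {dose} mg")
# medication: aspirin, dose: 75 mg
# medication: atorvastatin, dose: 20 mg
```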
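And for expert systems, a minimal sketch of the kind of if-then rules such systems encode. The thresholds are deliberately simplified illustrations, not actual clinical guidance:

```python
# Minimal rule-based decision support sketch. The thresholds are
# simplified illustrations, not actual clinical guidelines.
def early_warning(temp_c: float, heart_rate: int, resp_rate: int) -> str:
    criteria_met = sum([
        temp_c > 38.0 or temp_c < 36.0,  # abnormal temperature
        heart_rate > 90,                  # elevated heart rate
        resp_rate > 20,                   # elevated respiratory rate
    ])
    # Expert systems chain explicit if-then rules like these, so their
    # output can be traced back to the guideline they encode.
    if criteria_met >= 2:
        return "Flag for clinician review"
    return "No rule triggered"

print(early_warning(temp_c=38.6, heart_rate=104, resp_rate=22))
```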
Disclosing AI Uses to Patients: How Can This Be Done?
Before discussing the challenges of disclosing AI uses to patients, we must first clarify how such uses might be communicated. Here are some examples, although the list below is not exhaustive.
First, clinicians can inform patients during consultations that AI systems are being used to support their diagnosis, treatment, or care management. For example, a radiologist might tell a patient that she has used a computer system, powered by AI, to detect abnormalities more accurately.
Second, AI-related information can be included in patient consent forms. For example, in some institutions where AI is used for surgical planning, such as in robotic-assisted surgeries, the consent form may include a section detailing the use of AI.
Third, information sheets or brochures can be provided to patients, explaining how AI is integrated into their care and its benefits. For instance, IBM Watson uses AI to help oncologists generate treatment recommendations, and patients may receive pamphlets explaining that AI analyses medical research to support the doctor’s recommendations.
Fourth, clinicians or healthcare institutions can provide information about AI usage through patient portals or electronic health record (EHR) systems. When patients log in to view their records, they could be notified of AI’s involvement in specific diagnoses or treatments. For example, in systems like Epic EHR, where AI tools assist in risk prediction (such as identifying patients at high risk for sepsis), patients could see a message about AI’s role in the relevant risk assessment.
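As a hypothetical illustration, a portal might attach a short notice to any record entry flagged as AI-assisted. The record fields and wording below are assumptions made for the sketch, not any real EHR vendor’s API:

```python
# Hypothetical sketch of a patient-portal AI disclosure notice.
# The record fields and message wording are assumptions for
# illustration; they do not reflect any real EHR system's API.
from typing import Optional

def ai_disclosure_notice(entry: dict) -> Optional[str]:
    """Return a patient-facing message if AI contributed to this entry."""
    if not entry.get("ai_assisted"):
        return None
    return (
        f"Note: an AI tool ({entry['ai_tool']}) assisted your care team "
        f"with {entry['purpose']}. The final assessment was made and "
        "reviewed by your clinician."
    )

entry = {
    "ai_assisted": True,
    "ai_tool": "sepsis risk prediction model",
    "purpose": "identifying patients at high risk of sepsis",
}
print(ai_disclosure_notice(entry))
```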
Fifth, when AI systems are used to offer second opinions or as decision support tools, clinicians can clarify that the technology enhances rather than replaces their clinical judgment. For example, in radiology, a radiologist could inform the patient that she used an AI system as an additional check to ensure nothing was missed in the scan.
Sixth, clinicians can explain to patients how the AI system tailors treatment to their unique data. For example, a clinician could inform patients that their treatment plans are generated by AI tools that analyse past medical research findings to provide personalised recommendations.
Seventh, AI can be integrated into shared decision-making, where patients are involved in the final decision based on the AI’s insights alongside the clinician’s expertise. For example, if AI is used to evaluate treatment options for a cancer patient, the clinician could explain that the AI tool has analysed various options based on the patient’s medical history and outcomes from similar cases. However, the patient retains the right to choose between these treatment options themselves.
Finally, healthcare institutions can publish formal policies or statements on their websites or platforms outlining how AI is used within their system. For example, they could provide details of typical AI use cases in their medical processes.
Disclosing Uses of AI in Medical Care: Preliminary Grounds
The claim that clinicians and healthcare institutions should disclose their use of AI in providing medical care often arises from the following considerations: (1) data privacy, security, and trust, and (2) bias and fairness. These considerations are emphasised by the EU General Data Protection Regulation (GDPR), the UK Data Protection Act (DPA), and the EU AI Act, but they also intersect with various other frameworks regulating healthcare institutions and professionals. Some also defend the claim with reference to risk, rights, materiality, and autonomy, as we shall see from Hatherley’s (2024) analysis.
The GDPR and DPA regulate the collection, storage, and use of patient data to ensure privacy, security, and accountability. They require healthcare providers to collect only necessary data, store it securely, and use it transparently, with patients’ explicit consent. These regulations also enforce patients’ rights to access, correct, or delete their data. The EU AI Act complements these by ensuring that AI systems processing patient data are safe, ethical, and transparent, requiring risk assessments, robust data governance, and clear accountability for AI system developers and users.
However, AI systems often require large amounts of data to function effectively, sometimes sourced from multiple patients or pooled from different healthcare institutions. This raises concerns about whether patients are fully informed about how their data is being used, shared, or de-identified for AI training. Transparency about data usage is crucial for maintaining trust. If patients are not fully informed about how their data is being used to train or operate AI systems, it could violate their right to privacy. It could also raise ethical concerns about data ownership, particularly when AI companies are involved.
Meanwhile, AI systems can inherit biases from the data on which they are trained, resulting in differential treatment of different patient groups. For instance, AI tools could offer unequal recommendations for patients of different ethnicities, genders, or socioeconomic backgrounds. In the UK, the NHS aims to provide universal and equitable healthcare, making the issue of fairness in AI particularly important. If AI systems used in the NHS result in biased outcomes, this could potentially violate the NHS Constitution (2023), which guarantees that healthcare is provided equally to all based on clinical need, not patient demographics.
However, without transparency, patients and healthcare providers might not recognise or be able to address the potential biases embedded in AI algorithms, which can disproportionately affect marginalised groups. Uninformed patients lack the ability to question or challenge AI-driven decisions that may be flawed due to biased data, thereby perpetuating inequalities in diagnosis, treatment, or care. Disclosure, it could be argued, enables patient scrutiny and fosters patient involvement, which is crucial for mitigating biases and ensuring that AI systems in medical care serve all populations equitably.
There is another way to characterise the problems of not disclosing information about AI uses to patients. Recently, Joshua Hatherley provided a succinct summary of four key arguments for clinicians to disclose their use of AI to patients: the risk-based argument, the rights-based argument, the materiality argument, and the autonomy argument. Although Hatherley himself is critical of these arguments, and they overlap slightly with the considerations mentioned earlier in this section, they nonetheless offer a useful perspective on why disclosing AI use is valuable in medical care. Here are the four arguments (Hatherley, 2024):
- The Risk-Based Argument: Clinicians are ethically required to disclose the use of ML systems because they introduce significant risks to patient safety. These risks include adversarial attacks, robustness challenges, and algorithmic biases. Adversarial attacks involve the deliberate manipulation of data fed into ML systems, causing incorrect predictions or classifications, which could harm patients. Robustness challenges occur when ML systems, trained in specific environments, do not generalise well to new clinical settings, potentially producing erroneous outputs. Finally, algorithmic bias can disproportionately affect underrepresented patient groups, resulting in unequal treatment outcomes (Kiener, 2021).
- The Rights-Based Argument: Prominently defended by Ploug and Holm (2020), this argument asserts that patients have a moral right to refuse the use of medical ML systems in their diagnostics and treatment planning, which obligates clinicians to disclose the use of such systems. This right is rooted in the broader principle that patients have the right to act on rational concerns about the future, such as fears that AI could surpass human doctors or monopolise healthcare decisions.
- The Materiality Argument: Clinicians must disclose the use of ML systems because such information is crucial to a patient’s decision-making. Information is considered material if it would influence a substantial number of patients to change their consent decisions. Given the heightened public concern and media attention surrounding AI, many patients may have strong preferences about whether AI systems are used in their care. Additionally, the phenomenon of algorithmic aversion (the tendency for people to distrust algorithms, even when they outperform humans) suggests that patients might avoid care involving AI, making this consideration material to their decision-making as well (Findley, Woods, Robertson, & Slepian, 2020).
- The Autonomy Argument: Clinicians are obligated to disclose the use of ML systems because failing to do so could undermine patient autonomy and compromise the ethical ideal of shared decision-making. Shared decision-making is a collaborative process where clinicians and patients discuss treatment options and consider evidence to reach an informed decision. However, ML systems often embed certain ethical values or prioritise specific outcomes (e.g., favouring life extension over quality of life), which may conflict with individual patient preferences. Also, the opacity of some ML systems might prevent clinicians from fully explaining their diagnostic or treatment recommendations, limiting patients’ ability to participate meaningfully in decision-making (Beauchamp & Childress, 1994).
Which AI Uses Should Be Disclosed?
There is a fundamental question that underlies many of the ethical and policy challenges surrounding the obligation for clinicians to disclose the use of AI to patients: which uses of AI should be disclosed?
Several considerations underline the significance of this issue. First, not all uses of AI directly impact patient outcomes. Disclosing every use of AI could overwhelm patients with information that is not relevant to their care or decision-making process. For example, if AI is used only in non-clinical contexts, such as hospital scheduling, its significance to the patient may be minimal. However, the situation becomes much more complex when AI systems are directly involved in clinical decisions, such as diagnosis or treatment planning, where patients might reasonably expect disclosure.
Second, there are questions about the threshold of materiality, or, in simpler terms, what constitutes ‘relevant’ information for decision-making. The materiality argument suggests that clinicians should disclose AI use if it is material to a patient’s decision-making. However, as Hatherley points out, what counts as material information varies considerably between individuals. Materiality hinges on whether the information would influence a reasonable patient’s decision to consent to treatment, yet patients’ preferences and attitudes toward AI differ widely. Disclosing AI use to some patients might lead to decision fatigue or unnecessary anxiety, especially given the general algorithmic aversion many people exhibit.
In particular, clinicians are generally not required to disclose every factor that influences their clinical decisions, such as their own prior experiences or consultations with other experts. It is not immediately clear why the influence of AI should be treated differently from these other factors, unless it can be demonstrated that AI plays a more central or significant role in the decision-making process (Hatherley, 2024, p. 2).
Third, a major challenge in determining which AI uses should be disclosed is the ‘black box’ nature of many machine learning models, which often function in ways that are not easily interpretable by clinicians (Facchini & Termine, 2022). Even if AI use is disclosed, clinicians may struggle to answer patient questions about how the system works or why it made a particular recommendation, as they themselves may not fully understand the AI’s processes. For instance, if a radiologist uses an AI system to identify cancerous cells but cannot explain why the AI flagged certain areas as suspicious, disclosing the AI’s role might simply cause further confusion and anxiety for the patient without improving their understanding of the diagnosis.
Finally, disclosing every use of AI might shift the focus away from the important aspects of medical care and turn the disclosure process into a mere formality, rather than genuinely informing patients. This approach could lead to a situation where the emphasis is on meeting ethical requirements rather than effectively communicating relevant information that impacts patient care.
For instance, if a clinician discloses every use of AI, including administrative tasks like appointment scheduling or billing, alongside critical AI applications like diagnostic imaging, inappropriate uses of AI systems could be socially licensed ‘on the grounds that, by disclosing their every use of a medical ML system, a clinician has…done their due diligence with respect to the patient’ (Hatherley, 2024, p. 6). Such blanket disclosure might also shift patients’ attention away from what matters clinically, such as understanding their diagnosis and treatment options.
Conclusion
While it remains unclear what specific information clinicians should disclose to patients and how, it is feasible to educate the public about the role of AI in modern healthcare, as well as the relevant issues surrounding data protection and ethics. Regardless of how the ethical and regulatory landscape of AI disclosure evolves, it is crucial to build strong literacy in responsible AI and data practices to adapt to such changes. Working closely with healthcare institutions, IGS offers a range of training modules in AI, data protection, and ethics. Please contact us to discuss the training options available for individuals or organisations.
References
Beauchamp, T. L., & Childress, J. F. (1994). Principles of biomedical ethics. Oxford University Press.
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthc J, 6(2), 94-98. doi:10.7861/futurehosp.6-2-94
Erdaw, Y., & Tachbele, E. (2021). Machine learning model applied on chest x-ray images enables automatic detection of COVID-19 cases with high accuracy. Int J Gen Med, 14, 4923-4931. doi:10.2147/ijgm.S325609
Facchini, A., & Termine, A. (2022). Towards a taxonomy for the opacity of AI systems. Paper presented at Philosophy and Theory of Artificial Intelligence 2021, Cham.
Findley, J., Woods, A., Robertson, C., & Slepian, M. (2020). Keeping the patient at the center of machine learning in healthcare. The American Journal of Bioethics, 20(11), 54-56. doi:10.1080/15265161.2020.1820100
Hatherley, J. (2024). Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? J Med Ethics. doi:10.1136/jme-2024-109905
Kiener, M. (2021). Artificial intelligence in medicine and the disclosure of risks. AI & SOCIETY, 36(3), 705-713. doi:10.1007/s00146-020-01085-w
Koleck, T. A., Dreisbach, C., Bourne, P. E., & Bakken, S. (2019). Natural language processing of symptoms documented in free-text narratives of electronic health records: a systematic review. Journal of the American Medical Informatics Association, 26(4), 364-379. doi:10.1093/jamia/ocy173
The NHS Constitution for England. (2023). Department of Health & Social Care.
Ploug, T., & Holm, S. (2020). The right to refuse diagnostics and treatment planning by artificial intelligence. Med Health Care Philos, 23(1), 107-114. doi:10.1007/s11019-019-09912-8
Saibene, A., Assale, M., & Giltri, M. (2021). Expert systems: Definitions, advantages and issues in medical field applications. Expert Systems with Applications, 177, 114900. doi:10.1016/j.eswa.2021.114900
Schork, N. J. (2019). Artificial intelligence and personalized medicine. Cancer Treat Res, 178, 265-283. doi:10.1007/978-3-030-16391-4_11
Thornton, N., Binesmael, A., Horton, T., & Hardie, T. (2024). AI in health care: What do the public and NHS staff think? The Health Foundation. Retrieved from https://www.health.org.uk/publications/long-reads/ai-in-health-care-what-do-the-public-and-nhs-staff-think