Trust in the collection and use of health data

It is often said that building, maintaining or restoring public trust is an essential step for an organisation seeking to unlock the power of data and data-driven technologies. The following considers what exactly we mean by ‘trust’, in order to shed some light on the role of public trust in the collection and use of health data.

What is trust?

According to most philosophical accounts, there are two basic conditions of trust. First, trust is a relation involving two people and a task. Second, trust involves expectations about both competence and willingness. If I trust my doctor to act in the best interests of my health, I believe that she has the necessary skills and is willing to exercise them. Because trust involves expectations on the part of the trusting person, trust also involves uncertainty and risk. Trusting someone involves taking a ‘leap of faith’ about how the other person will act.

Why might trust be important for health data?

Health data is information that is often routinely collected as part of a person’s interactions with the health service. Health data can be highly useful to NHS bodies, academics, and commercial organisations for research and planning purposes. Large datasets of health data have the potential to transform patient care, by enabling the development of life-saving treatments and technologies.

In some cases, public trust is a prerequisite for unlocking the potential benefits of using health data. For example, if the NHS does not maintain public trust, patients may be unwilling to share their information outside of their GP practice. As highlighted in the Department of Health and Social Care’s data strategy, “Data Saves Lives: reshaping health and social care with data”[1], opt-outs on a large scale can compromise the quality of the data, and therefore the usefulness of the dataset as a whole. It is therefore unsurprising that public trust is often a central feature of NHS strategies and policies concerning the use of health data.

How can organisations gain trust?

Transparency is often advocated as one of the measures that may foster public trust in the collection and use of data[2]. However, it is important to remember that trust involves expectations about both competence and willingness. Whether a person chooses to trust or not will inevitably be shaped by that individual’s experiences, beliefs, and knowledge. As a result, not everyone will trust an organisation to handle their health data, no matter how warranted that trust may be.

Dr Natalie Banner, who previously led the Understanding Patient Data initiative hosted at the Wellcome Trust, distinguishes between trust and trustworthiness[3]. While trust depends on features of the person placing their trust, trustworthiness depends on features of the object of trust. A trustworthy person is someone in whom our trust is well-grounded. Dr Banner argues that, rather than relying on individuals’ subjective feelings of trust, organisations should instead concentrate on objective practices and behaviours to demonstrate their trustworthiness.

The Data Ethics Framework[4] and the Wellcome Trust’s Understanding Patient Data initiative[5] can be useful starting points for establishing trustworthy systems for using health data.

What are the limits of trust?

An independent report published by the Department of Health and Social Care, “Better, broader, safer: using health data for research and analysis”[6], recently asked the NHS to acknowledge in its policies the shortcomings of ‘trust’ as a technique for managing patient privacy. The report noted that, as the number of people accessing health data grows, so too does the risk that untrustworthy individuals will gain access. While there are various administrative systems in place to ensure that the overwhelming majority of health data users are trustworthy, it is well documented that large datasets are often misused.

These genuine risks to privacy should not be downplayed or ignored. Systems and services that house health data must have appropriate measures in place to ensure that they are resilient to ‘bad actors’ or untrustworthy users, and these mechanisms should protect the information from misuse. In particular, the independent report recommended the adoption of ‘trusted research environments’ (TREs) as an effective way to manage privacy.

Trust or confidence?

In philosophical accounts of trust, there are two other terms which are useful to distinguish from trust: ‘reliance’ and ‘confidence’. Reliance is dependence based on a prediction of how the other is likely to behave. There are cases in which we can rely on a system that we do not trust. In these cases, we might instead believe that there are appropriate assurances or guarantees that will protect the system against failure. When these measures are in place, we obviate the need for trust. For example, we might rely on the bus to take us to work even though we hold no beliefs about either the competence or the willingness of the bus driver. Mackenzie Graham has called this form of reliance “assured reliance”, or confidence[7].

We need not trust the data systems and infrastructure collecting and using our health data, but we should be able to rely on them, and on the structures that govern them, with confidence. This requires that the systems within which our health data is shared are worthy of confidence.

A confidence-worthy data system requires assurances or guarantees to mitigate privacy risks, which can be provided by robust technical and governance measures. For example, a clear data sharing agreement can function as an assurance by specifying the purposes of the data sharing, what data will be shared, who will have access to the data, and what security measures will be in place.
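By way of illustration, the commitments listed above could in principle be captured in a structured, machine-checkable form. The sketch below shows one way this might look; the field names, example values and the checking logic are hypothetical illustrations under our own assumptions, not a real NHS schema or template.

```python
# A minimal sketch of how the elements of a data sharing agreement described
# above might be recorded as a structured object. All field names and example
# values are hypothetical illustrations, not a real agreement or NHS schema.
from dataclasses import dataclass


@dataclass
class DataSharingAgreement:
    purposes: list[str]           # why the data is being shared
    data_items: list[str]         # what data will be shared
    authorised_users: list[str]   # who will have access to the data
    security_measures: list[str]  # safeguards that will be in place
    expiry_date: str              # when access ends (ISO date)

    def permits(self, user: str, purpose: str) -> bool:
        """Return True only if both the user and the purpose are covered."""
        return user in self.authorised_users and purpose in self.purposes


agreement = DataSharingAgreement(
    purposes=["diabetes service planning"],
    data_items=["HbA1c results (pseudonymised)", "age band", "GP practice code"],
    authorised_users=["regional-analytics-team"],
    security_measures=["access within a trusted research environment only",
                       "audit logging of every query"],
    expiry_date="2025-12-31",
)

print(agreement.permits("regional-analytics-team", "diabetes service planning"))  # True
print(agreement.permits("third-party-marketing", "advertising"))                   # False
```

Recording an agreement in this kind of explicit form is one way such an assurance can be made auditable: whether a given use falls within the agreed purposes becomes a question that can be checked rather than merely trusted.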

The role of trust

Of course, in any system, some uncertainty and risk will remain. When no guarantees or assurances are available, we may still need to trust that organisations will act responsibly with our health data. For example, the lack of clear and definitive laws and regulations governing the deployment of fair algorithms means that we will sometimes be required to trust that organisations have trained their algorithms on datasets that are appropriately representative of the population.
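To give a concrete, if simplified, picture of what such a check might involve, the sketch below compares the share of each age band in a hypothetical training dataset against its share in the wider population. The groups, figures and the tolerance threshold are illustrative assumptions only, not a prescribed method.

```python
# A rough sketch of one way an organisation might check whether a training
# dataset is representative of the population it will serve, by comparing
# subgroup shares against reference population shares. The age bands, figures
# and the 20% relative tolerance are illustrative assumptions only.

# Hypothetical share of each age band in the reference population
population_shares = {"0-17": 0.21, "18-44": 0.35, "45-64": 0.25, "65+": 0.19}

# Hypothetical share of each age band in the training dataset
training_shares = {"0-17": 0.05, "18-44": 0.40, "45-64": 0.30, "65+": 0.25}

TOLERANCE = 0.20  # flag groups whose share deviates by more than 20% (relative)

for group, expected in population_shares.items():
    observed = training_shares.get(group, 0.0)
    relative_gap = abs(observed - expected) / expected
    status = "under/over-represented" if relative_gap > TOLERANCE else "ok"
    print(f"{group}: population {expected:.0%}, training {observed:.0%} -> {status}")
```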

While this example does not cover all the situations in which trust may still be necessary, it is clear that there is still a role for trust in the handling of health data.


[1] https://www.gov.uk/government/publications/data-saves-lives-reshaping-health-and-social-care-with-data/data-saves-lives-reshaping-health-and-social-care-with-data#improving-trust-in-the-health-and-care-systems-use-of-data

[2] Floridi, Luciano and Taddeo, Mariarosaria, What is Data Ethics? (November 14, 2016).

[3] https://understandingpatientdata.org.uk/what-we-mean-trustworthy-use-patient-data

[4] https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020

[5] https://understandingpatientdata.org.uk/

[6] https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis/better-broader-safer-using-health-data-for-research-and-analysis

[7] Graham M. Data for sale: trust, confidence and sharing health data with commercial companies. J Med Ethics. 2021 Jul 30:medethics-2021-107464. doi: 10.1136/medethics-2021-107464.
