On 16 May, news broke that Google is facing a new class-action lawsuit before the High Court.
The tech giant is being sued for exploiting the NHS patient data of 1.6 million people, which it received from the Royal Free NHS Trust without patients’ knowledge or consent.
This article will analyse the relevance of this case for the debate around the use of AI in the healthcare sector; furthermore, it will reflect on whether a balance can be found between developing AI for medical and research purposes and strict compliance with the GDPR.
Google and DeepMind sued for unauthorized use of NHS medical data
In 2015, DeepMind, Google’s UK subsidiary, and the Royal Free NHS Trust signed an agreement governing the transfer of Royal Free patients’ data to DeepMind to test a smartphone app called ‘Streams’; the app was developed to help clinicians detect acute kidney injury.
However, Royal Free’s patients were not aware that their data were being transferred to Google DeepMind for the purpose of testing the Streams application. After a lengthy investigation, the Information Commissioner’s Office (ICO), the UK regulatory authority, found that those patients had not consented to the transfer and that the justification offered by Royal Free and DeepMind was unlawful.
Dame Fiona Caldicott, at that time the National Data Guardian at the Department of Health, who contributed to an investigation into the deal, wrote that she had informed Royal Free and DeepMind that she “did not believe that when the patient data was shared with Google DeepMind, implied consent for direct care was an appropriate legal basis“.
Considering all the above, it is therefore arguable that Google, through DeepMind, received, collected and processed patients’ sensitive data without their explicit consent and for a purpose entirely different from the one that justified the original collection of the data: the provision of direct clinical care.
However, even though the regulator could have issued monetary penalties potentially running into millions of pounds, it chose instead to acknowledge a dearth of guidance from the Department of Health and simply demanded that Royal Free commit to making changes to address its shortcomings.
Moreover, it remains unclear why the ICO did not take further action; for example, ordering DeepMind to delete the patient data or forbidding it from partnering with other NHS Trusts to roll out the app. Indeed, in June 2017, DeepMind announced it would roll out its Streams app in a second hospital.
Why it matters
The claim against Google and DeepMind, on the other hand, could have a substantial impact, not only because of the consequences the company may face, but above all because “it should provide some much-needed clarity as to the proper parameters in which technology companies can be allowed to access and make use of private health information“; these are the words of Ben Lasserson, partner at Mishcon de Reya, the law firm representing Mr Prismall, the claimant in the case.
In this regard, it is worth noting that this is a claim for misuse of private information, not a breach-of-confidence claim under the Data Protection Act. Legal technicalities aside, this represents a significant opportunity to overcome the barrier of having to show evidence of material damage or distress.
Indeed, a similar legal action against Google was blocked last year by a Supreme Court decision over claims that the tech giant had secretly tracked millions of iPhone users’ web browsing activity while telling them it was not doing so. That claim failed because the claimant could not prove that the group he represented had “suffered any material damage or distress”.
The positive health outcomes of Artificial Intelligence
Nevertheless, even with all the above concerns about the unlawful use of patients’ personal data, another aspect needs to be considered: the value of AI in the healthcare sector.
Nurses and doctors who are already using applications such as Streams say that it is helping them deliver faster and better care to their patients.
Moreover, in a study published in Nature (McKinney et al. 2020), an AI system demonstrated performance close to that of double reading with arbitration (statistically non-inferior) and superior to that of the first reader. Expert research clinicians believe that deploying this technology as a second reader has the potential to: (1) ultimately improve patient outcomes through improved accuracy and reduced variability; (2) allow expansion to alternative screening strategies, such as biannual or personalised stratified approaches; and (3) reduce time to results, improving the patient experience.
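For readers unfamiliar with the screening protocol the study benchmarks against, the logic of double reading with arbitration, and of an AI system slotting in as the second reader, can be sketched in a few lines. The function name and the arbitration stand-in below are illustrative assumptions, not details from the paper:

```python
def double_read(first_reader_recall: bool, second_reader_recall: bool,
                arbitrate=lambda: True) -> bool:
    """Return whether a screening case is recalled for further assessment.

    If the two readers (e.g. a radiologist and an AI second reader) agree,
    their shared decision stands; if they disagree, an arbitration step
    (here a stand-in callable, in practice a third clinician) decides.
    """
    if first_reader_recall == second_reader_recall:
        return first_reader_recall
    return arbitrate()

# Agreement: no arbitration is needed.
assert double_read(True, True) is True
assert double_read(False, False) is False
# Disagreement: the arbiter's decision is final.
assert double_read(True, False, arbitrate=lambda: False) is False
```

The point of the sketch is that an AI second reader only changes outcomes in discordant cases, which is why variability between first readers matters so much to the protocol.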
However, can we truly maintain that this is sufficient to justify silently tolerating abuses of how our most sensitive data are treated?
The truth is that healthcare data is extremely valuable for AI companies; healthcare providers, insurance agencies, and other stakeholders don’t just store typical personal data (demographics, preferences, and the like). A typical patient’s healthcare file adds data on symptoms, treatments, and other health concerns.
While it can be hard to strike a proper balance between two such vital interests, given the categories of data in question and the right of individuals to know how their data are treated and by whom, I believe there are a few basic principles, already enshrined in data protection legislation, that should always apply in these circumstances:
- Transparency: AI companies must be transparent on how patients’ data are used;
- Purpose limitation: an AI company must have a “deeply rooted” purpose and right to collect that information;
- Data minimisation: the data collected and the purpose of the AI company must be limited by design;
- Awareness and opt-out: patients must be aware of who will receive their data for any purpose other than direct clinical care and must have the right to opt out of being part of a given project.
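To make these principles concrete, here is a minimal sketch of how they might be enforced in code before patient records leave a trust’s systems. Every name, field, and purpose string below is a hypothetical illustration, not a real NHS or DeepMind interface:

```python
# Data minimisation: only the fields the declared purpose actually needs.
ALLOWED_FIELDS = {"patient_id", "creatinine_level", "test_date"}

def prepare_transfer(records, consented_ids, purpose, declared_purposes):
    """Filter patient records for an external transfer.

    records           -- list of dicts, one per patient record
    consented_ids     -- set of patient IDs who have not opted out
    purpose           -- the purpose of this transfer
    declared_purposes -- purposes that were transparently declared to patients
    """
    # Purpose limitation: refuse any transfer outside the declared purposes.
    if purpose not in declared_purposes:
        raise ValueError(f"purpose {purpose!r} was never declared to patients")
    out = []
    for rec in records:
        # Opt-out: skip patients who have not agreed to this project.
        if rec["patient_id"] not in consented_ids:
            continue
        # Data minimisation: strip every field the purpose does not need.
        out.append({k: v for k, v in rec.items() if k in ALLOWED_FIELDS})
    return out
```

In this sketch, a transfer for an undeclared purpose fails loudly rather than silently succeeding; that design choice mirrors the complaint in the Royal Free case, where data flowed for app testing under a legal basis meant for direct care.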