Data Protection News Update 12 May 2025

United Kingdom

UK’s Legal Aid Agency experiences cyberattack

  • The Legal Aid Agency (LAA), an executive agency of the United Kingdom’s Ministry of Justice, has experienced a cyberattack. According to the LAA, the payment information of legal aid providers may have been compromised, though it is not yet confirmed whether any data was accessed.
  • The LAA oversees billions of pounds’ worth of legal funding, employs approximately 1,250 individuals, and operates the nation’s Public Defender Service. Notably, approximately 2,000 legal providers supply services for civil and criminal legal aid under contracts with the LAA.
  • Andrew Costis, Engineering Manager of the Adversary Research Team at AttackIQ, comments, “This incident is the latest in a series of major cybersecurity breaches to have rocked the U.K. in the past week, including the breaches of retail giants M&S, Co-op, and Harrods. The increase in high-profile attacks over such a short period of time has raised serious concerns about the U.K.’s cyber defence infrastructure.”

AI model trained on de-identified data from 57 million people

  • As part of a pilot study operated within the NHS England Secure Data Environment (SDE), a generative artificial intelligence (AI) model called Foresight is being trained on the de-identified, routinely collected NHS data of 57 million patients in England, in an effort to predict potential health outcomes for different patient groups.
  • The Foresight AI model is a collaboration between NHS England, University College London (UCL), King’s College London (KCL) and several other health and care research organisations.
  • The study aims to predict medical events such as hospitalisation, heart attacks, and new diagnoses. Early prediction of these events could enable more targeted interventions and support a shift towards large-scale preventative healthcare.
  • By using data that represents the entire population of England, the model will be able to predict outcomes across all demographics, including those with rare conditions.
  • Currently, the model is using recent data—from November 2018 to the end of 2023—across a limited set of datasets and is being made available specifically for COVID-19 research.

United States

Google agrees to pay Texas $1.375bn over data-privacy claims

  • Google has agreed to pay $1.375 billion (£1 billion) to settle a lawsuit brought by Texas Attorney General (AG) Ken Paxton over allegations of violating user data privacy. The settlement resolves two lawsuits related to Google’s handling of Incognito mode, location tracking, and biometric data. Google has not admitted any wrongdoing.
  • “This settles a raft of old claims, many of which have already been resolved elsewhere,” said Google spokesperson José Castañeda. “We’re pleased to put them behind us, and we will continue to build strong privacy controls into our services.”
  • AG Paxton said, “In Texas, Big Tech is not above the law. For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry… I fought back and won.”
  • In 2022, Paxton sued Google twice, accusing the company of collecting Texans’ facial and voice data without consent, tracking users even when location services were turned off, and misleading users about the privacy of Incognito mode.
  • The settlement does not require Google to make any product changes. Last year, Meta (Facebook and Instagram’s parent company) also paid $1.4 billion to settle similar allegations from Paxton regarding facial recognition data.

Europe

EU to Track Crypto Transfers Under New AML Rules: Eurogroup President

  • The European Union (EU) is seeking to track cryptocurrency transfers. Specifically, Eurogroup President Paschal Donohoe explained that the EU seeks to expand its anti-money laundering (AML) regulation “to record data on the senders and recipients of [crypto] funds”.
  • Such an expansion of AML regulation is “essential” according to Donohoe, who added that the EU wants to move such regulation “beyond the more traditional forms of financial transfer” and allow for “the transparency of crypto asset transfers.”
  • This new regulation, which will be rolled out from July 1, 2027, will prohibit cryptocurrency service providers from providing or interacting with anonymous wallets and privacy coins. It will also require exchanges and other centralised entities (e.g., custodial wallets) to identify users of self-hosted wallets who use their services.
  • For many within the cryptocurrency industry, such provisions are “lopsided toward surveillance,” as Monero (a privacy-focused cryptocurrency) developer Riccardo Spagni stated. “From 1 July 2027, EU‑licensed exchanges and custodians will be barred from handling privacy coins such as Monero,” he said. “This goes well beyond the risk‑based approach normally applied to cash, prepaid cards, or even end‑to‑end‑encrypted messaging.”

International

Notes from the Asia-Pacific region: OPC releases draft guidance on Privacy Act amendment

  • On 1st May 2026, an amendment to the New Zealand Privacy Act 2020 will come into force that expands privacy notice obligations to cover the indirect collection of personal information.
  • The New Zealand Office of the Privacy Commissioner (OPC) has released draft guidance on the new Information Privacy Principle 3A (IPP 3A), clarifying that generic privacy notices won’t suffice; organisations must proactively and specifically notify individuals after indirect data collection.
  • Most other international privacy laws already require organisations to provide a privacy notice when personal information is collected from third parties — for example, Australian Privacy Principle 5 and Article 14 of the EU General Data Protection Regulation.
  • This amendment is intended primarily to ensure New Zealand retains its EU adequacy status.

Brazil enacts AI-focused law to combat psychological violence against women

  • On 24th April 2025, Brazil passed Law No. 15.123/2025, which introduces increased penalties in cases of psychological violence against women when committed using artificial intelligence or other technological tools capable of altering images, sounds, or videos.
  • The law amends Article 147-B of the Penal Code of Brazil, which defines psychological violence against women. It makes the use of AI in the commission of such offenses an aggravating factor, allowing up to a 50% increase in penalties when offenses involve tools like deepfakes or other synthetic media.
  • Brazil’s Law No. 15.123/2025 aligns with a broader global movement to regulate the harmful applications of AI technologies. Similar concerns are reflected in initiatives like the EU AI Act, which imposes transparency obligations for deepfake content, and legislative efforts in the U.S. targeting the malicious use of synthetic media.
