Data Protection News Update 09 February 2026

United Kingdom

ICO opens formal data protection investigation into Grok AI

  • The UK ICO has opened formal investigations into X Internet Unlimited Company and X.AI over the processing of personal data by the Grok AI system, following reports that it has been used to generate non-consensual sexualised images and videos of real individuals, including children.
  • The ICO will assess whether personal data was processed lawfully, fairly and transparently, and whether sufficient technical and organisational safeguards were built into Grok’s design to prevent the creation of harmful manipulated content.
  • The investigation will be coordinated with Ofcom and international regulators; if violations are found, the ICO can take enforcement action, including fines of up to £17.5m or 4% of global annual turnover, whichever is higher.
  • The case has escalated at EU level, with French cybercrime prosecutors and Europol reportedly raiding X’s Paris office and interviewing senior executives, including Elon Musk, as part of a broader European investigation into Grok’s deepfake generation and potential breaches of child protection laws.

GP surgery reprimanded for excessive disclosure of patient medical records to insurance

  • The ICO has reprimanded Staines Health Group after it disclosed 23 years of a terminally ill patient’s medical records to an insurance company, far exceeding a request for five years of history. The records were sent directly to the insurer instead of to the patient for review, and the patient believes the excessive disclosure negatively affected their insurance payout.
  • The ICO found failures in the surgery’s data protection practices, including the absence of written procedures for handling insurance requests and a lack of regular training for staff.
  • Following the incident, the GP practice introduced new processes and training, but the ICO reinforced that all organisations handling health data must apply strong safeguards and procedures when sharing sensitive personal information.

United States

TikTok’s US restructuring and updated terms raise concerns over sensitive data collection

  • TikTok’s creation of a new US-based joint venture to avoid a nationwide ban has led to scrutiny over how the platform collects, processes, and governs user data.
  • Updates to TikTok’s terms and conditions have heightened user concern by explicitly confirming the types of data TikTok may collect about users, including precise location data (unless users opt out) and special category data such as racial or ethnic origin, sexual orientation, immigration status and financial information, among others.
  • This restructuring raises questions about whether data-driven moderation and algorithmic controls are being used in ways that affect freedom of expression and user trust, alongside potential compliance issues under US state privacy laws.
  • Growing concerns over privacy, data governance and perceived censorship have contributed to a rise in US users deleting the app, while regulators including California authorities are now looking into whether TikTok’s operations breach domestic legal and data protection requirements.

AI-assisted cloud attack compromises AWS environment in minutes

  • An attacker compromised an AWS cloud environment and went from initial access to full administrative privileges in under 10 minutes, with multiple indicators suggesting extensive use of AI to automate the attack.
  • Evidence of AI assistance included LLM-generated code with unusual elements such as Serbian-language comments, hallucinated AWS account IDs, and references to non-existent GitHub repositories.
  • After gaining administrative access, the attacker exfiltrated secrets, logs, source code and internal data, then abused Amazon Bedrock and GPU resources to secretly run multiple AI models, a type of attack known as “LLMjacking.”
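For readers responsible for cloud security, a common first response to reports like this is to review CloudTrail logs for unexpected Bedrock activity. The sketch below is a hypothetical, illustrative heuristic (not from the reported incident): it flags Bedrock `InvokeModel` events attributed to principals outside an assumed allow-list, one signal defenders use to spot LLMjacking after a credential compromise. The field names follow the CloudTrail event record format; the principal name `ml-platform-role` is an invented placeholder.

```python
# Hypothetical sketch: flag CloudTrail events showing Bedrock model
# invocations by principals outside an expected allow-list -- one
# heuristic for spotting "LLMjacking" after a credential compromise.
EXPECTED_PRINCIPALS = {"ml-platform-role"}  # assumed name, illustrative only


def suspicious_bedrock_events(events):
    """Return CloudTrail event records that look like unauthorised
    Bedrock model invocations."""
    return [
        e for e in events
        if e.get("eventSource") == "bedrock.amazonaws.com"
        and e.get("eventName") == "InvokeModel"
        and e.get("userIdentity", {}).get("userName") not in EXPECTED_PRINCIPALS
    ]


# Minimal fabricated sample records for illustration:
sample = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"userName": "ml-platform-role"}},
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"userName": "compromised-admin"}},
]
flagged = suspicious_bedrock_events(sample)  # flags only the second record
```

In practice such records would come from CloudTrail itself (e.g. via a log archive or the `LookupEvents` API) and the allow-list from your IAM inventory; this fragment only shows the filtering step.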

Europe

Spain to ban social media for children under 16

  • Spain plans to ban social media use for children under 16, pending parliamentary approval, as part of broader measures to protect minors from harmful online content and platform-driven risks.
  • The proposal would require platforms to implement effective age-verification systems, hold executives accountable for illegal or harmful content, and criminalise algorithmic manipulation that amplifies unlawful material.
  • The move follows similar action in Australia and growing momentum across Europe, with France, Denmark, Austria and the UK all considering age-based restrictions on social media access.
  • The announcement comes amid heightened scrutiny of platforms such as X, TikTok and Instagram, including investigations into AI-generated sexualised content, though the Spanish plan faces political hurdles due to the government’s lack of a parliamentary majority.

International

Ontario sets privacy-first framework for AI in health care

  • Ontario’s Information and Privacy Commissioner (IPC) has released new guidance aimed at managing the growing use of AI in health care, including Principles for the Responsible Use of AI and sector-specific guidance on AI medical scribes.
  • The guidance responds to pressure on clinicians, who face significant administrative burdens and see AI as a potential efficiency tool, yet remain deeply concerned about privacy, liability, bias, and the lack of clear standards for evaluating AI products used in clinical settings.
  • To foster innovation while strengthening public trust in Ontario’s health care system, data privacy experts are calling for stronger evidence standards and clearer rules governing access to medical data for research and commercial development, without undermining consent or patient rights.

For the latest updates on the data protection investigation into Grok AI, global privacy enforcement, AI misuse, social media regulation and major data governance developments across the UK, US, Europe and beyond, visit our Data Protection News hub.
