ICO fines pendant alarm company £100,000 for unlawful marketing calls
- The UK ICO fined TMAC Ltd, a pendant alarm and security systems company, £100,000 for making over 260,000 unsolicited marketing calls to individuals registered with the Telephone Preference Service, in breach of the Privacy and Electronic Communications Regulations (PECR).
- The company used unlawfully obtained personal data, with a director admitting phone numbers were sourced from a previous employer, highlighting serious failures in lawful data acquisition practices.
- Between February and September 2024, call handlers concealed their identity and made predatory calls targeting vulnerable individuals, including older people, falsely claiming to be calling on behalf of local crime and fire prevention initiatives.
- This case reflects enforcement risks where organisations fail to obtain valid consent, screen data against suppression lists, and ensure fair and lawful processing, particularly in direct marketing activities involving personal data.
NHS staff refuse to work on Palantir’s £330m data platform over ethical concerns
- A growing number of NHS data analysts and health officials are refusing to work on Palantir’s £330m contract to build the Federated Data Platform (FDP), awarded in 2023 to collate NHS operational data including waiting lists, patient information and staffing. Objectors cite the company’s US defence work and its founders’ political alignment with Donald Trump.
- Internal concern is spreading: a government briefing prepared for the Health Secretary acknowledged that Palantir’s public profile was likely obstructing FDP adoption and making it harder to expand the platform’s scope, including to GP data.
- Despite the controversy, 123 out of 205 NHS hospital trusts in England are using the FDP, with the project holding the highest green delivery rating from the Treasury’s infrastructure authority.
United States
Meta ordered to pay $375m for misleading child safety on platforms
- A New Mexico jury has ordered Meta to pay $375m in civil penalties after finding the company violated state consumer protection law by misleading the public about the safety of its platforms (Facebook, Instagram and WhatsApp) for children, marking the first successful state lawsuit of its kind against Meta.
- During the seven-week trial, internal Meta documents and whistleblower testimony revealed the company was aware that its recommendation algorithms were showing underage users sexually explicit content and exposing them to predators, with internal research at one point finding 16% of Instagram users had encountered unwanted sexual content in a single week.
- The case has significant implications for platform accountability and sits alongside similar lawsuits in the US, including a separate Los Angeles trial examining whether Meta and Google intentionally designed addictive platforms for minors.
- Meta, which intends to appeal, has indicated that steps were taken to improve child safety, including the rollout of Teen Accounts on Instagram and a recent feature alerting parents when children search for self-harm content.
Iranian hacktivists breach FBI director’s personal email in geopolitical crisis
- Iranian hacktivist group Handala Hack published over 300 personal emails and images belonging to FBI Director Kash Patel, claiming the breach as retaliation for the FBI seizing Handala-linked domains and offering a $10m reward for information on group members.
- The FBI confirmed the incident but noted the leaked data is historical (2010-2019) and contains no government information.
- Cybersecurity experts warn that high-profile individuals are increasingly targeted through personal, unmanaged accounts and devices, which lack the enterprise-grade protections of institutional networks and offer a softer entry point for state-sponsored threat actors.
Anthropic accidentally leaks nearly 2,000 Claude Code source files in human error incident
- Anthropic inadvertently exposed part of the internal source code for its Claude Code coding assistant after a mispackaged software update pointed to an archive of nearly 2,000 files and 500,000 lines of code, which were copied to GitHub and viewed over 29 million times on X before Anthropic issued copyright takedown requests.
- The company confirmed no sensitive customer data or credentials were exposed, characterising the incident as a packaging error rather than a security breach, although the leaked files reportedly contained commercially sensitive material including tools and instructions for deploying Claude models as coding agents, potentially benefiting competitors such as OpenAI and Google.
- This is the second data leak affecting Anthropic in recent weeks, following a separate incident in which thousands of internal files were found stored on publicly accessible systems. The leak raises reputational as well as security questions for Anthropic at a commercially sensitive moment: paid subscriptions to Claude have more than doubled this year and Claude Code has become a key product, making the integrity of its architecture increasingly valuable and its accidental exposure more significant.
Europe
European Commission confirms second data breach of 2026 as hackers claim 350GB stolen
- The European Commission confirmed it was targeted in a cyberattack affecting cloud infrastructure hosting its Europa.eu web presence, with early findings suggesting data was exfiltrated although public-facing websites remained operational and internal systems were reportedly unaffected.
- The ShinyHunters cyber extortion group claimed responsibility, alleging theft of over 350GB of data including mail server dumps, databases, confidential documents and contracts, with reports indicating the attackers targeted the Commission’s AWS accounts via a compromised account or security misconfiguration rather than an AWS vulnerability.
- AWS confirmed its services operated as intended, highlighting that the breach most likely resulted from an access control or configuration failure on the Commission’s side.
- This is the second confirmed breach affecting the Commission in 2026, following a February incident in which CERT-EU discovered evidence of an intrusion potentially exposing staff personal data.
International
Indonesia mandates child account removal on platforms as Meta and Google face non-compliance action
- Indonesia has introduced regulations requiring social media platforms deemed high-risk to deactivate accounts belonging to users under 16, with Meta and Google immediately identified as non-compliant.
- Platforms classified as high-risk, including Meta, Google, TikTok and Roblox, are assessed against criteria such as the ability to interact with strangers, addictive design features and psychological risks.
- The move follows Australia’s social media ban for under-16s last year, highlighting a broader international shift towards hard regulatory intervention on child online safety, particularly in markets with high youth internet penetration: Indonesia has around 70 million children under 16, with internet usage among Gen Z reaching nearly 88%.
- While both Meta and Google stated last week that they had put safeguards in place for children, Indonesian authorities characterised these as insufficient, underscoring the recurring tension between platforms’ self-regulatory claims and governments’ child safety expectations.