United Kingdom
Domestic abuse victim data stolen in Legal Aid hack
- The Ministry of Justice has announced that a “significant amount” of private data has been stolen from the Legal Aid Agency’s online system, including details of domestic abuse victims.
- The agency’s services were hacked in April and data dating back to 2010 was downloaded, with estimates that more than two million pieces of information were taken. The breach covers all areas of the aid system – including domestic abuse victims, those in family cases and others facing criminal prosecution.
- Justice minister Sarah Sackman told the House of Commons that there was no indication yet that any other government systems had been affected by the breach. The ministry said it was working with the National Crime Agency and the National Cyber Security Centre and has informed the Information Commissioner.
- The Legal Aid Agency’s online digital services, which are used by legal aid providers to log their work and get paid by the government, have since been taken offline.
Are cyber attacks about to become the norm for British retailers?
- In the last month, cyber attacks have gone from a destructive but uncommon problem to the top of most Britons’ news feeds. Marks & Spencer, the Co-op and Harrods – plus international firms Dior and Coinbase – have all been recent targets, with smaller or unsuccessful attacks likely to have gone under the radar.
- While retailers have been particularly targeted, the connecting thread appears to be data-heavy firms running insecure legacy systems. Retailers collect and process millions of personal records – names, addresses, payment details, shopping habits – often with the specific goal of hyper-personalising advertising, which makes them a lucrative target for cyber criminals.
- The recent attacks are unlikely to be the last, and in many ways they are a glimpse of what is to come. For UK retailers, decades of under-investment in cyber protection, combined with an expansion in the amount of data that firms hold and process, have created a vault of information with a much lower level of security than in other sectors, such as banking. The growing dependence on third-party digital services adds yet more weak points, making many firms increasingly exposed.
- In addition, the massive expansion in the adoption of artificial intelligence plays both sides of the cyber security arms race. According to Cisco’s 2025 Cybersecurity Readiness Index, while 92% of UK firms use AI to detect or respond to threats, 78% have also suffered AI-related breaches.
- For example, many attackers now use generative AI to craft more convincing phishing emails, automate intrusion attempts, and even mimic employee communications. AI has also widened the risk from “shadow AI” – employees’ use of unapproved AI tools that lack proper security vetting.
United States
Musk’s DOGE expanding his Grok AI in US government, raising conflict concerns
- Elon Musk’s DOGE team is expanding the use of his artificial intelligence chatbot Grok in the US federal government to analyse data, potentially violating conflict-of-interest laws and putting sensitive information on millions of Americans at risk. Grok was developed by xAI, an AI venture Musk launched in 2023, and was rolled out on his social media platform, X.
- According to three people with knowledge of DOGE’s activities, Musk’s team was using a customised version of the Grok chatbot to sift through data more efficiently. DOGE staff allegedly told Department of Homeland Security officials to use it even though Grok had not been approved within the department.
- If the data was sensitive or confidential government information, the arrangement could violate security and privacy laws. It could also give the Tesla and SpaceX CEO access to valuable non-public federal contracting data at agencies he privately does business with, and the data could be used to help train Grok. Concerns include the risk that government data will leak back to xAI, a private company, and a lack of clarity over who has access to this custom version of Grok.
- If Musk was directly involved in decisions to use Grok, it could violate a criminal conflict-of-interest statute which bars officials – including special government employees – from participating in matters that could benefit them financially, according to University of Minnesota professor Richard Painter.
Europe
Commission to offer mid-cap companies data protection relief in simplification plan
- Small “mid-cap” companies can expect an exemption from some obligations of the GDPR as part of the European Commission’s new rule-simplification package.
- Until now, companies with fewer than 250 employees have been exempt from certain data privacy record-keeping rules to reduce their administrative costs. Under the new package, this derogation is extended to “small mid-cap companies”, which can employ up to 500 people and generate higher turnover. Such companies will only have to keep records of their data processing when it is considered “high risk”, for example the processing of private medical information.
- Supporters argue these changes will make the EU’s privacy standards more proportionate and easier to enforce. By contrast, critics warn that the Commission’s plan could have unintended consequences.
- Civil society and consumer groups warn that the change risks “weakening key accountability safeguards” by making data protection obligations depend on company size rather than the actual risk to people’s rights, in addition to concerns this could lead to further pressure to roll back other parts of the GDPR. According to these groups, the focus should instead be on better enforcement of existing rules and more practical support for small companies.
Italy’s data watchdog fines AI company Replika’s developer $5.6 million
- Italy’s data protection authority has fined the developer of the AI chatbot Replika €5 million for breaching rules designed to protect users’ personal data.
- Launched in 2017, San Francisco-based startup Replika offers users customised avatars that can have conversations with them. The “virtual friend” is marketed as being able to improve the emotional wellbeing of users.
- Italian privacy watchdog the Garante ordered Replika to suspend its service in the country in February 2023, citing specific risks to children. Following an investigation, it found that Replika lacked a legal basis for processing users’ data and had no age-verification system to restrict children’s access to the service, resulting in the fine for its developer, Luka Inc.
- The Italian authority has also announced a separate investigation to assess whether Replika’s generative AI system is compliant with European Union privacy rules, especially around the training of its language model.
International
‘We cannot be left behind:’ How Canada is balancing AI regulation, innovation
- Prime Minister Mark Carney emphasized plans for Canada’s AI transformation while on the campaign trail and recently appointed a federal AI minister to his cabinet who will oversee the implementation of those endeavors. However, regulatory balance is still unclear as provinces begin to address AI safety and governance in the absence of federal action.
- The Parliament of Canada worked toward an answer with proposed AI regulation packaged into the omnibus Bill C-27, which was considered across two legislative cycles. The bill was ultimately abandoned when the legislative docket was cleared ahead of national elections, in which the Liberal Party retained control of the government.
- Although AI regulation is expected to again become a legislative priority when the 45th Parliament begins on May 26, there is no indication whether Bill C-27 will be reintroduced or whether lawmakers might opt for a different legislative vehicle.
- In the face of federal inaction, there is potential for fragmentation on AI regulation as provincial governments begin to adopt new requirements around AI development and use, such as Ontario’s Bill 194 (Strengthening Cyber Security and Building Trust in the Public Sector Act) and Quebec’s Law 25, a modernization of private-sector privacy law with associated AI provisions.



