United Kingdom
UK’s New Digital IDs Raise Security and Privacy Fears
- Security experts have raised concerns over the UK government’s plans for a digital ID wallet, which would let British citizens store all government-issued documents in a single location on their smartphones. The government says this will increase the security and convenience of using identification documents for everyday purposes such as proving age and claiming benefits.
- The GOV.UK Wallet will launch in summer 2025 and will initially store Veteran Cards and Driving Licences, with a rollout to all forms of government-issued identification by the end of 2027.
- The UK Department for Science, Innovation and Technology (DSIT) highlighted the security of biometric protections built into modern smartphones compared with relying on physical documents alone.
- Bridewell’s Associate Director of Data Privacy noted: “If a centralized digital ID system were compromised, it wouldn’t just result in leaked phone numbers or email addresses. A major breach would likely expose complete identities, leading to identity theft, fraud, and lasting harm to victims’ financial and personal lives.”
United States
A view from Brussels: Europe, Trump II and data transfers
- U.S. President Donald Trump issued an executive order rescinding “harmful executive orders and actions” adopted under the previous administration. The order demanded that all three Democratic members of the U.S. Privacy and Civil Liberties Oversight Board (PCLOB) resign by close of business 23 Jan. or be fired.
- The PCLOB is an independent watchdog formed to scrutinize U.S. surveillance practices and review their alignment with privacy and civil liberty requirements, including in the trans-Atlantic context that underpins EU-U.S. data transfer arrangements. The U.S. Department of Justice also saw the dismissal of several top-level career national security officials. The purge of these officials, while not directly related to data protection, sends a signal to European officials about the stability of privacy oversight in the U.S.
- However, there have been no firm signals from the Trump camp that EU-U.S. data transfers would be at risk. The PCLOB needs a quorum of three members to function and could be back at that level very soon. There is also no indication the new administration plans to do away with other building blocks that support the Data Privacy Framework (DPF) implementation, which has bridged several administrations with continued bicameral and bipartisan support.
LinkedIn accused of using private messages to train AI
- A U.S. lawsuit, filed on behalf of LinkedIn Premium users, has accused LinkedIn of sharing their private messages with other companies to train AI models. The lawsuit seeks $1,000 (£812) per user for alleged violations of the U.S. federal Stored Communications Act, as well as unspecified damages for breach of contract and violations of California’s unfair competition law.
- The lawsuit alleges that LinkedIn ‘quietly’ introduced a privacy setting in August last year. This setting automatically opted users into a programme that allowed third parties to use their personal data to train AI. The lawsuit further alleges LinkedIn tried to conceal its actions by changing its privacy policy to say information collected from users could be used to train AI.
- The lawsuit noted, “This behaviour suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimise public scrutiny”.
Europe
Meta’s revised paid ad-free service may breach EU privacy laws, consumer group says
- The European Consumer Organisation (BEUC) said Meta Platforms’ revised no-ads subscription service may still breach EU consumer and privacy laws in addition to antitrust rules.
- Meta rolled out the fee-based service for Facebook and Instagram in 2023, and subsequently offered European users the option to receive less personalised ads. The BEUC previously complained about the fee-based service, and said that the changes made to it were cosmetic.
- BEUC’s Director General said, “In our view, the tech giant fails to address the fundamental issue that Facebook and Instagram users are not being presented with a fair choice and is making a weak bid to argue it is complying with EU law while still pushing users towards its behavioural ads system.”
- A Meta spokesperson disagreed with BEUC’s conclusions, saying November’s changes meet EU regulator demands and go beyond what is required by EU law.
ECHR rules France violated privacy rights through ‘marital duty’ divorce fault
- The European Court of Human Rights (ECHR) ruled that France violated Ms. H.W.’s right to respect for private and family life, her right to sexual freedom, and her right to bodily autonomy by granting a divorce on the grounds that she failed to perform her “marital duty” to have sexual relations with her husband.
- The Court found that the very existence of a marital obligation interferes with Ms. H.W.’s right to respect for private life, her sexual freedom, and her right to bodily autonomy under Article 8 of the European Convention on Human Rights. It highlighted that such interference with private life does not align with France’s positive duty as a contracting state to prevent domestic and sexual violence under international law.
- A joint statement published by two French feminist groups applauded the decision, highlighting the importance of abolishing the French legal concept of a “conjugal duty” to protect women from rape.
International
Analysing South Korea’s Framework Act on the Development of AI
- South Korea has enacted a comprehensive regulatory law on artificial intelligence, the Framework Act on the Development of Artificial Intelligence and Establishment of Trust Foundation (the “AI Framework Act”). The act will take effect on 22 January 2026.
- The act aims to protect citizens’ rights and dignity, improve their quality of life, and strengthen national competitiveness by regulating fundamental matters necessary for the sound development of AI and establishment of a foundation of trust. The law provides a basis for the government to establish ethical AI principles and allows educational institutions, research institutions and AI businesses to establish private autonomous AI ethics committees to comply with these principles.
- The act mandates transparency obligations, including prior notification to users when providing products or services using high-impact or generative AI, and labelling requirements for generative AI outputs. The act includes an obligation to evaluate potential impacts on individuals’ fundamental rights in advance when providing products or services using high-impact AI, and national agencies must prioritize the use of products or services that have undergone the fundamental rights impact assessment.
Notes from the Asia-Pacific region: ‘Right balance’ needed on New Zealand biometrics code
- The Office of the Privacy Commissioner of New Zealand (OPC) announced its decision to issue a Biometrics Processing Privacy Code, following an extensive period of consultation. The OPC has the power to form codes of practice that have the force of law under New Zealand’s privacy regulatory regime.
- The Biometric Processing Privacy Code requires organisations to assess whether collection is necessary and proportionate — that is, the benefits to the organisation, individual and/or public outweigh any privacy risk as well as the impact on Māori — before carrying out any biometric processing.
- The code mandates that organisations notify people (clearly, before or at the time of collection) that they are carrying out biometric processing, for what purpose, and what alternatives are available. The code restricts some uses of high-privacy risk biometric processing (such as using it to determine emotions, infer health conditions, or categorize people according to race, ethnicity, disability, gender or sexual orientation).
- The code will apply retrospectively: existing biometric processing must be brought into compliance within nine months of the code taking effect.