United Kingdom
New data laws unveiled to improve public services and boost UK economy by £10 billion
- The Data Use and Access Bill will unlock the secure and effective use of data for the public interest, without adding pressures to the country’s finances.
- The bill has three core objectives: growing the economy, improving UK public services, and making people’s lives easier. The measures will be underpinned by a revamped Information Commissioner’s Office, with a new structure and powers of enforcement – ensuring people’s personal data will be protected to high standards.
- Some of its key measures include cutting down on bureaucracy for police officers and making patients’ data easily transferable across the NHS, so that frontline staff can make better-informed decisions for patients more quickly.
- The Bill will legislate on digital verification services, meaning companies that provide tools for verifying identities will be able to get certified against the government’s stringent trust framework of standards and receive a ‘trust mark’ as a result.
X to weaken block function despite harassment and privacy fears
- X has confirmed that anyone will be able to view a user’s public posts, even if they have been blocked by that user. The changes mean blocking somebody will simply stop them from being able to reply to a post, like it, repost it or share it.
- X claims the change will protect people against attempts to ‘share and hide harmful or private information’ about them.
- Critics argue it will empower ‘stalkers and harassers’ by making it much easier for them to see content posted by their targets.
United States
Netflix Faces Invasion of Privacy Suit for Outing Fertility Doctor’s Secret Children in ‘Our Father’ Documentary
- Three women are suing Netflix in federal court over “Our Father,” a 2022 documentary about Donald Cline, an Indiana fertility doctor who secretly fathered 94 children. The suit is for “public disclosure of private facts,” arguing that the documentary outed them as Cline’s “secret children.”
- The film included shots of the 23andMe website, which listed the names of three women who had wished to remain anonymous.
- Judge Tanya Walton Pratt allowed two of the women to proceed to trial, noting that “the method by which Defendants intruded on Plaintiffs’ privacy allowed hundreds of millions of people worldwide to see their names in the Trailer and in the Film” and the story involved precisely the sort of highly intimate information that can create harm if exposed.
Europe
AI Firms’ Training Arguments Get a Lifeline From EU Privacy Case
- An Oct. 4 decision from the Court of Justice of the European Union and guidance from the European Data Protection Board four days later confirm that a company’s “purely commercial” concerns can constitute a legitimate interest under the EU’s General Data Protection Regulation.
- The court decision and guidance will support companies’ efforts to argue their large language model training complies with European law, even though AI was not addressed specifically.
- As companies increasingly collect data in complicated ways, obtaining consent becomes incredibly difficult. For example, AI model developers likely won’t be able to ask for consent from every individual whose personal information is fed to a large language model for training.
Privacy ombudsman warns against access to migrants’ cells
- The head of Italy’s privacy authority, Pasquale Stanzione, said it was necessary to reflect on a measure in the migrant flow decree recently approved by the government. The measure allows security forces to access the cell phones and other electronic devices of asylum seekers or migrants held at repatriation centres who do not cooperate in their identification.
- Stanzione expressed concerns over the proportionality of the measure and preemptively asked judicial authorities to examine it. He stated that particular attention must be given to migrant minors, as the legislation also grants access to electronic devices in the case of “unaccompanied foreign minors”.
LinkedIn fined $335 million in EU for tracking ads privacy breaches
- LinkedIn has been reprimanded and fined €310 million for privacy violations related to its tracking ads business by Ireland’s Data Protection Commission (DPC) under the European Union’s General Data Protection Regulation (GDPR). The regulator found a raft of breaches, including breaches of the lawfulness, fairness and transparency of its data processing in this area.
- The justifications for the legal basis LinkedIn relied upon to run its tracking ads business were found to be invalid. LinkedIn also did not properly inform users about its uses of their information, per the DPC’s decision.
- DPC deputy commissioner Graham Doyle said: “The lawfulness of processing is a fundamental aspect of data protection law, and the processing of personal data without an appropriate legal basis is a clear and serious violation of a data subject’s fundamental right to data protection.”
International
LinkedIn Under Scrutiny for Potential South African Privacy Violations
- The South African Artificial Intelligence Association (SAAIA) has accused LinkedIn of violating local data privacy laws by illegally training its AI models using data from South African residents.
- SAAIA asserts that LinkedIn’s policy changes have enabled it to improve its AI models using user data without explicit consent, contrary to the Protection of Personal Information Act, which prohibits the processing of user data without consent.
- SAAIA is calling for the Information Regulator to launch an investigation into the matter.
- A LinkedIn spokesperson has defended the company’s AI training processes, stating that users have the option to opt out.