United Kingdom
NHS patients dying because of problems sharing medical records, coroners warn
- Coroners have issued 36 warnings this year over inadequate sharing of NHS patient information; in some cases patients died because clinicians could not access important details about their needs.
- The problems are caused by conflicting IT systems, restricted access to medical records and obstacles to sharing information outside the NHS.
- Labour has announced plans to store each NHS patient’s health data in one place, and for patient records to be made readily available via standardised information systems across the NHS. The plans apply to England, with health mostly devolved in the rest of the UK.
- Privacy campaigners are concerned that Labour’s plans threaten patient confidentiality. A coordinator at medConfidential has said that “the new central care records will have written notes from your GP accessible wherever the NHS logo is seen”. They called for patients to be able to see in the NHS app “when and where records were snooped on”.
Companies building AI-powered tech are using your posts. Here’s how to opt out
- Being opted in by default to AI-powered search engines when using social media is an industry-wide issue. A report from the Federal Trade Commission on the data practices of nine social media and streaming platforms, including WhatsApp, Facebook, YouTube, and Amazon, found that nearly all of them fed people’s personal information into automated systems with no comprehensive or transparent way for users to opt out.
- In most cases, it is possible to opt out of published data being fed into these automated systems through the platforms’ websites, though the options available depend on the user’s region and the individual platform.
- On both YouTube and Reddit, there are no visible settings to protect users’ posts or content from being used to train the companies’ own AI or other firms’ models.
United States
OpenAI and others seek new path to smarter AI as current methods hit limitations
- Artificial intelligence (AI) companies are seeking to overcome unexpected delays and challenges in the pursuit of ever-bigger large language models by developing training techniques that use more human-like ways for algorithms to “think” (one such idea is sketched after this summary).
- Since the release of ChatGPT two years ago, technology companies have publicly maintained that “scaling up” current models by adding more data and computing power will consistently lead to improved AI models. Now, some prominent AI scientists are speaking out on the limitations of this “bigger is better” philosophy.
- Researchers have been running into delays and disappointing outcomes: ‘training runs’ for large models are increasingly prone to hardware-induced failures, and the enormous amounts of data these models require are becoming harder to supply, as easily accessible data has largely been exhausted.
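The reporting does not spell out these techniques, but one widely discussed direction is spending extra compute at inference time rather than on ever-larger training runs, for example by sampling several candidate answers and keeping the best-scored one (“best-of-n”). The Python sketch below is a minimal illustration under that assumption; `generate` and `score` are hypothetical stand-ins for a real model’s sampling and verification steps, not any company’s actual method.

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Pretend model call: return one candidate answer for the prompt."""
    random.seed(seed)
    return f"{prompt} -> candidate #{seed} (quality={random.random():.2f})"

def score(answer: str) -> float:
    """Pretend verifier: rate how good a candidate answer looks."""
    return float(answer.split("quality=")[1].rstrip(")"))

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidates and keep the highest-scoring one --
    trading extra inference-time compute for quality, instead of
    training a bigger model."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("example prompt"))
```

Increasing `n` buys better answers at the cost of more inference compute, which is the general trade-off behind this family of approaches.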
Europe
Meta fined nearly €800mn for breaking EU law over classified ads practices
- Meta has been fined nearly €800mn after regulators accused Facebook’s parent company of stifling competition by “tying” its free Marketplace services with the social network.
- The EU’s outgoing competition chief has said that, by linking Facebook with its classified ads service, Meta has “imposed unfair trading conditions” on other providers.
- Meta has said it will appeal the decision, adding: “The European Commission’s decision provides no evidence of competitive harm to rivals or any harm to consumers.”
- The EU’s antitrust probe into Meta was launched following accusations from rivals in 2019 that Meta was abusing its dominant position by offering free services while profiting from data collected on the platform.
Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups – report
- The Danish welfare authority, Udbetaling Danmark (UDK), risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of artificial intelligence (AI) tools to flag individuals for social benefits fraud investigations.
- Amnesty International’s Researcher on Artificial Intelligence and Human Rights has said that “This mass surveillance has created a social benefits system that risks targeting, rather than supporting the very people it was meant to protect.”
- The fraud-detection algorithms rely on the extensive collection and merging of personal data from public databases covering millions of Danish residents, including information on residency status and movements, citizenship, place of birth, and family relationships. These are sensitive data points that can also serve as proxies for a person’s race, ethnicity, or sexual orientation (see the illustrative sketch below).
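To make the “proxy” point concrete, the sketch below shows how a rule that never mentions ethnicity can still score identical behaviour differently depending on place of birth. The data, the `flag_for_investigation` rule, and its threshold are all invented for illustration; this is not based on UDK’s actual algorithms.

```python
# Invented example records: same behaviour, different place of birth.
records = [
    {"id": 1, "place_of_birth": "DK", "benefit_changes": 1},
    {"id": 2, "place_of_birth": "DK", "benefit_changes": 2},
    {"id": 3, "place_of_birth": "SY", "benefit_changes": 1},
    {"id": 4, "place_of_birth": "SY", "benefit_changes": 2},
]

def flag_for_investigation(record: dict) -> bool:
    """Hypothetical rule: weighting 'foreign-born' makes place of birth
    a proxy for ethnicity or migration status, so identical behaviour
    is scored differently."""
    risk = record["benefit_changes"]
    if record["place_of_birth"] != "DK":  # the proxy variable
        risk += 2
    return risk >= 3

for r in records:
    print(r["id"], flag_for_investigation(r))
# Output: 1 False, 2 False, 3 True, 4 True.
# Records 2 and 4 behave identically but get different outcomes.
```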
International
Encrypted messaging app developer moves out of Australia after police visit employee’s home
- The founder of Session, an encrypted messaging app, has left Australia following a police visit to an employee’s home, citing the country’s “hostile” stance against developers building privacy-focused apps.
- The Oxen Privacy Tech Foundation, which created Session, has had employees approached by Victoria Police and the Australian Federal Police via chat messages, letters and phone calls; last year, officers also visited an employee’s apartment.
- A spokesperson for the Australian Federal Police confirmed it “is aware” of the app, and “has seen the use of Session by offenders while committing serious commonwealth offences”.
- Under Australian anti-terrorism laws passed in 2018, law enforcement can issue notices requiring developers to assist in investigations, but these powers have rarely been used.