United Kingdom
UK ICO issues guidance on AI recruitment tools
- The UK’s Information Commissioner’s Office (ICO) has issued guidance on the use of AI recruitment tools following a wide-ranging review.
- The guidance responds to ICO concerns about the impact of AI on job applicants’ privacy and information rights. In particular, some AI tools were not processing information fairly (for example, allowing recruiters to filter out candidates with protected characteristics) and were collecting and storing far more personal information than necessary without candidates’ knowledge.
- The ICO Director of Assurance, Ian Hulme, said: “Our report signals our expectations for the use of AI in recruitment, and we’re calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers.”
United States
US Supreme Court mulls Facebook bid to escape securities fraud suit
- U.S. Supreme Court justices heard Facebook’s appeal of a lower court’s decision allowing the 2018 class action led by Amalgamated Bank to proceed. The federal securities fraud lawsuit was brought by shareholders who accused the social media platform of misleading them about the misuse of its user data.
- The plaintiffs accused Facebook of misleading investors in violation of the Securities Exchange Act, a 1934 federal law that requires publicly traded companies to disclose their business risks. They claimed the company unlawfully withheld information from investors about a 2015 data breach involving British political consulting firm Cambridge Analytica that affected more than 30 million Facebook users. The plaintiffs argue that Facebook portrayed the risk of data breaches as purely hypothetical rather than as something that had already occurred.
- The Supreme Court’s ruling is expected by the end of June.
US judicial panel to develop rules to address AI-produced evidence
- The U.S. Judicial Conference’s Advisory Committee on Evidence Rules agreed to develop a rule to regulate the introduction of artificial intelligence-generated evidence and begin work on a policy to potentially help judges deal with claims that a piece of audio or video evidence is a “deep fake.”
- The meeting came amid broader efforts by federal and state courts nationally to address the rise of generative AI, including programs like OpenAI’s ChatGPT that are capable of learning patterns from large datasets and then generating text, images and videos.
- The rule will be designed to address concerns about the reliability of the processes used by computer technologies to make predictions or draw inferences from existing data, akin to issues courts have long addressed concerning the reliability of expert witnesses’ testimony.
Europe
What to know about the EU Cyber Resilience Act
- The EU Cyber Resilience Act (CRA) was adopted by the Council of the EU on 10 October 2024.
- The CRA imposes cybersecurity and vulnerability-handling requirements on certain products with digital elements. These are wired or wireless products connected to the internet, including software or hardware components placed on the market separately. The products are likely to include laptops, mobile devices and smartphones, microprocessors, routers, and smart home devices. The bulk of the requirements apply in three years’ time.
- The requirements mean these products must be designed, developed and produced to ensure an appropriate level of cybersecurity based on the risks. Additionally, certain categories of products with digital elements trigger specific requirements; manufacturers have reporting obligations; and there are fines for non-compliance with the CRA.
- Manufacturers of products with digital elements sold in the EU should check whether those products are likely to be caught by the CRA and, if so, which categories they are likely to fall into.
Italy’s privacy watchdog raps Intesa over data breach incident
- Italy’s data protection authority announced Intesa Sanpaolo had underestimated the seriousness of a data breach incident involving thousands of customers, widely reported to include Prime Minister Giorgia Meloni. The case involves an employee who allegedly accessed the data of about 3,500 clients.
- Italy’s data protection authority said in a statement on Tuesday that the bank had not adequately informed it about the extent of the breach, which only became apparent later through press reports and was subsequently confirmed by Intesa. It said the potential consequences of the breach included disclosure of information on the financial status of individuals and reputational damage.
- The authority said it would assess the adequacy of the security measures the bank has put in place and ordered it to provide feedback within 30 days.
International
Alberta announces new privacy legislation and adds heftier fines for violations
- The Alberta government announced new legislation that would protect the privacy of Albertans and add heftier fines for breaking privacy laws.
- Bill 33, the Protection of Privacy Act, is one of two new bills proposed to replace the existing Freedom of Information and Protection of Privacy Act (FOIP Act).
- The legislation would force public bodies to pay more attention to how they manage personal information and introduce requirements for handling the data in their possession. If an Albertan’s information is used in an automated system to generate content, decisions, recommendations, or predictions, public bodies would be required to notify them.
- Bill 33 would prohibit individuals and organizations from knowingly violating its rules, including collecting, using, or disclosing personal information without consent; attempting to re-identify a person based on non-personal data; making false statements; or obstructing or failing to comply with the Office of the Information and Privacy Commissioner.
South Korea fines Meta $15 million over data breach
- South Korea’s Personal Information Protection Commission (PIPC) has imposed a fine of 21.6232 billion won (approximately USD 15.67 million) on Meta Platforms, Facebook’s parent company, citing major breaches of the country’s Personal Information Protection Act (PIPA).
- The PIPC said its investigation found that Meta collected highly sensitive information from about 980,000 South Korean users, including details on political and religious beliefs as well as same-sex marital status.
- According to the PIPC’s official statement, Meta shared this data with around 4,000 advertisers, who used it for targeted advertising based on topics like religious affiliations, gender identities, and affiliations with groups such as North Korean defectors.