Data Protection News Update 17 February 2025

United Kingdom

UK Government under fire over public sector guidance on using overseas clouds

  • The Department for Science, Innovation and Technology (DSIT) is facing backlash over guidance allowing public sector organisations to use cloud services hosted in data centres outside the UK, citing benefits in cost, sustainability, IT resilience, and competition.
  • DSIT stated that the guidance reinforces the government’s existing ‘cloud-first policy’ launched in 2013, which initially permitted government organisations to store data overseas.
  • This guidance has attracted widespread criticism:
    • From an innovation perspective, critics argue the guidance contradicts the government’s recently launched ‘AI opportunities action plan’ and may discourage national technological innovation.
    • From an economic standpoint, British suppliers are concerned that the guidance will reduce business opportunities for national cloud providers and disincentivise technology investment in the UK.
    • Data protection and security concerns have also been raised, particularly regarding the processing of sensitive government data outside the UK, where other countries’ data protection laws may be weaker. Mark Boost, CEO of UK cloud provider Civo, said: “For government departments handling sensitive data – including on health and national security – this is an unacceptable level of risk.”

UK ICO responds to amendments to the draft Data (Use and Access) Bill

  • The ICO updated its response to the House of Lords’ amendments to the draft Data (Use and Access) Bill, supporting the Bill as a step toward improving data protection in the UK and ensuring regulatory clarity. The ICO initially responded to the Bill when it was first introduced in October 2024.
  • The Bill introduces a public interest test for processing data for scientific research; the ICO has promised future guidance on the meaning of public interest.
  • For children’s data, the Bill introduces “higher protection matters,” requiring businesses to consider additional risks when handling children’s data, but the ICO seeks clarification on the interpretation of these protections.
  • The Bill extends the “soft opt-in” marketing rule to charities, allowing them to send marketing to individuals with an existing relationship. However, the ICO cautions charities to implement the change carefully.
  • The Bill requires the ICO to develop new codes of practice for automated decision-making, AI, and edtech, through secondary legislation.
  • The Bill also gives the ICO new responsibilities for regulating web crawlers and introduces new deepfake-related criminal offences; the ICO seeks further assurance on their compatibility with the European Convention on Human Rights. The Bill is now under review by the House of Commons, with finalisation expected soon.

United States

US House Republicans organise a privacy working group

  • The U.S. House Committee on Energy and Commerce has formed a data privacy working group within the Commerce, Manufacturing, and Trade subcommittee. The group’s goal is to “build a coalition” to draft a new comprehensive consumer privacy bill.
  • Previous efforts to pass a comprehensive data privacy bill have failed for years, resulting in the U.S. lagging behind other global regulators in terms of protections.
  • The House and Senate Commerce committees attempted to pass a bipartisan data privacy bill last year but faced opposition from House Republican leadership. In response, states have attempted to fill the gaps, forcing tech companies to follow a patchwork of policies.
  • The working group is composed entirely of Republican representatives, reflecting the push for a more unified approach to consumer privacy legislation. As a result, however, bipartisan approval may take time, as debates over existing privacy laws could delay progress on a new bill.

Europe

EU ditches plans to regulate tech patents, AI liability, online privacy

  • The European Commission (EC) scrapped three draft rules regulating technology patents, artificial intelligence (AI) liability, and consumer privacy on messaging apps, stating that it did not expect European Union lawmakers and countries to approve them.
  • The draft rule on patents would have regulated standard essential patents used in telecom equipment, mobile phones, computers, connected cars, and smart devices. Technology companies had varied responses to the withdrawal. A representative from Nokia welcomed the decision, citing the proposed rule’s “adverse impact on the global innovation ecosystem,” while the Fair Standards Alliance—a prominent lobbying group with members including BMW, Tesla, Google, and Amazon—opposed the Commission’s withdrawal decision.
  • The AI Liability Directive would have allowed consumers to sue for harm caused by AI technology providers, developers, or users.
  • The ‘ePrivacy regulation’ would have subjected WhatsApp and Skype to the same user privacy rules that apply to telecom providers. Moving forward, EU executives said they would assess whether to propose new rules regarding patents and AI, while the Commission deemed the ePrivacy proposal entirely “outdated” due to recent legislation.

International

Law firm restricts AI after ‘significant’ staff use

  • International law firm Hill Dickinson has blocked access to multiple artificial intelligence (AI) tools following a “significant increase in usage” by its staff.
  • During a seven-day period in January and February, the firm detected more than 32,000 visits to ChatGPT and more than 3,000 visits to DeepSeek, a Chinese AI chatbot service recently banned by several countries due to national security concerns.
  • The firm stated that much of the use was not in accordance with its AI policy, which prohibits uploading client information and requires verifying the accuracy of AI-generated responses. Going forward, Hill Dickinson will only allow staff to use AI tools via a request process, with some requests already approved.
  • A representative from the Information Commissioner’s Office told BBC News that law firms should not discourage staff use of AI: “With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar”.

Deregulation, competition take centre stage at AI Action Summit

  • The AI Action Summit, a global conference held in Paris, brought together representatives from over one hundred nations to align their strategies on AI innovation and regulation.
  • While some leaders, particularly from the EU and the U.S., called for technology deregulation to facilitate AI growth, other nations took the opportunity to reaffirm their commitment to responsible AI governance and safety.
  • A key moment was the signing of the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” endorsed by sixty attendees. The statement emphasised the development of AI that is “open, inclusive, transparent, ethical, safe, secure, and trustworthy,” and called for AI to be used in accordance with principles that promote competition and strong labour markets.
  • Notably, the U.K. and the U.S. were among the attendees that did not sign the statement. In a brief statement, the UK government said it had not been able to add its name to it because of concerns about national security and “global governance.”
  • The U.K. has long championed AI safety, with then Prime Minister Rishi Sunak hosting the first AI Safety Summit in November 2023. Andrew Dudfield, head of AI at fact-checking organisation ‘Full Fact,’ warned that the government’s decision to not sign the Paris agreement risks “undercutting its hard-won credibility as a world leader for safe, ethical and trustworthy AI innovation.”
