The development of Artificial Intelligence (AI) and its potential role in society have become a growing topic of discussion in many countries. More and more governments are considering how best to balance the benefits of AI development with data privacy and protection. This article summarises and compares how the European Union (EU), the United Kingdom (UK), and the United States (US) view the role of government in regulating and encouraging AI development.
The EU is arguably a global leader in ensuring data privacy and protection through legislation, and it aims to extend the protections already provided under EU law to cover AI development and its impact on individuals’ data privacy and protection. One such expansion is the Artificial Intelligence (AI) Act, which is currently going through the EU legislative process. The AI Act is likely to “be the world’s first comprehensive legislation governing [AI]”, according to Reuters. It will include bans on the use of facial recognition technology in public areas and on predictive policing tools, as well as transparency requirements for generative AI (such as ChatGPT). The AI Act will place different forms of AI technology into four risk categories: minimal, limited, high, and unacceptable. The obligations that developers and users of an AI technology must meet depend on its risk level – from simply meeting transparency obligations for minimal-risk AI technologies to being prohibited, with few exceptions, for AI posing unacceptable risk. Penalties for companies that fail to comply with the AI Act or that submit misleading or false documentation could reach 6% of global income or €30 million.
The EU is also investing heavily into its AI industry. According to the EU, “Through the Horizon Europe and Digital Europe programmes, the [EU] Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade”.
Compared to the EU, the UK is taking a more “hands-off” legislative approach to AI development. Last month, the Department for Science, Innovation and Technology, alongside the Office for Artificial Intelligence, released a white paper entitled ‘AI regulation: a pro-innovation approach’. The white paper states that the UK government will not prioritise creating new AI-specific regulations in the near future, in part because existing UK laws such as the UK GDPR and the Data Protection Act 2018 already cover many uses of AI. The government “will instead focus on creating guidelines to empower regulators and will only take statutory action when necessary”.
The UK government’s investment in AI development is also substantial, although smaller than the EU’s. Last March, the UK government announced “£1 billion of government funding pledged for the next generation of supercomputing and AI research”. This will include the UK government awarding “a £1 million prize every year for the next 10 years for the best research into AI”.
It can be argued that the US’s planned legislative approach to AI development starts from a weaker position than the UK’s or the EU’s, as the US lacks an equivalent of their existing data protection legislation. However, the US federal government is taking an active interest in potentially regulating future AI development. The Biden administration has released a non-legally-enforceable ‘Blueprint for an AI Bill of Rights’, which details how “automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services” should be used, designed, and deployed. The Biden administration has also announced that the “Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment”.
The US federal government’s interest in potentially regulating future AI development is not limited to its executive branch. The CEO of OpenAI (the creator of ChatGPT) testified before a Senate panel last Tuesday regarding US efforts to regulate AI. Senate Majority Leader Schumer has announced that he is currently drafting a “legislative framework aimed at addressing the potential risks posed by AI while not curtailing innovation in the tech sector”. Last month, US Senator Michael Bennet “introduced the Assuring Safe, Secure, Ethical, and Stable Systems for AI (ASSESS AI) Act to review existing AI policies across the federal government and to make the U.S. government lead by example in the responsible use of AI”.
The US is also investing government money into its AI industry. According to the White House, the “National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state”. This amount may be lower than the announced amounts from both the EU and the UK, but it demonstrates that the US federal government views AI development as something to be encouraged.
The growing development of AI and its uses across various sectors appear to be solidifying its current and future role in both national and international economies. As a result, more and more countries are attempting to strike a balance between encouraging AI development and protecting their citizens’ data rights. The examples of the EU, UK, and US demonstrate that this balance can be struck in various ways: from the scale of potential AI regulation to the relative amounts of government funding committed to encouraging AI development.