The Interplay between Ethics and Law in AI and Data

In today’s rapidly evolving technological landscape, understanding the key ethical and legal principles governing AI and data has become increasingly important. In this article, I delve into the complex relationship between ethics and law in the context of AI and data. That relationship is not merely a matter of the claim that AI and data ethics is more than compliance. It is also marked by the impact of ethics on law, and by the role of moral reasoning in legislative and judicial decisions. Understanding that relationship will help you adapt more quickly to the changing ethical and legal landscape of AI and data.

AI and Data Ethics: Beyond Compliance

What is lawful is not necessarily ethical. This should be a widely shared view, but for many people in the commercial world, AI and data ethics is just about compliance, or the necessary steps we should take to avoid legal risks in the future.

But this is not, and should not be, the case. Suppose there is a large clothing manufacturing company that operates in a developing country where labour regulations are relatively relaxed. The company employs workers in its factories under conditions that barely meet the legal minimum wage and safety standards set by the local government. The company’s employment practices may be deemed lawful since they comply with the minimum legal requirements set by the government of the country where it operates.

However, from an ethical standpoint, the company exploits vulnerable workers, prioritises profit maximisation over fair terms of employment, and more seriously, abuses human rights. In fact, the company is likely to be free from legal risks so long as it exploits workers in an environment with lax regulations.

Suppose further that this company uses surveillance cameras to monitor employees’ interactions, exploits AI tools to analyse all of the workers’ digital activities, and so on, to ensure that employees work as efficiently as possible. Because the company’s manufacturing processes take place in a country where AI and data regulations are relatively loose, it is even easier for the company to dodge legal responsibility for its unethical applications of AI.

One might think that this example only applies to organisations that exploit the less developed AI and data regulations of certain countries. After all, we already have much more comprehensive regulatory frameworks for AI and data that oversee domestic AI and data practices, so we might assume this makes unethical AI and data practices relatively rare on a domestic scale.

However, this assumption may be mistaken. Consider another example facing many of us: we are often frustrated by how difficult it is to cancel a subscription. Suppose a website uses AI tools to make it very easy and straightforward for users to sign up for a subscription, but makes the process of cancelling that subscription significantly more complicated, requiring users to navigate multiple pages, fill out forms, or even contact customer service directly. This could exploit people who find the process difficult to follow and understand; or, depending on the nature of the subscription and how much it costs, some people might simply give up out of frustration at how long it takes, putting it off to another day while continuing to pay for a subscription they no longer want. Such ‘dark patterns’ are not always illegal, but they persist on many online platforms today and have received considerable ethical criticism.
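The asymmetry in such a flow can be made concrete with a toy sketch. All the step names and the ratio below are entirely hypothetical, not drawn from any real platform; they simply illustrate how lopsided the two journeys can become:

```python
# Hypothetical sketch of the friction asymmetry behind 'dark patterns':
# sign-up takes one step, cancellation takes many. Step names are
# illustrative only and do not describe any actual service.

SIGNUP_STEPS = ["click_subscribe"]

CANCEL_STEPS = [
    "find_account_settings",
    "navigate_to_subscriptions",
    "fill_cancellation_form",
    "dismiss_retention_offer",
    "contact_customer_service",
]

def friction_ratio(signup, cancel):
    """Ratio of cancellation steps to sign-up steps: a crude proxy
    for how much harder leaving is than joining."""
    return len(cancel) / len(signup)

print(friction_ratio(SIGNUP_STEPS, CANCEL_STEPS))  # 5.0
```

A ratio well above 1 is exactly the kind of design choice that can be perfectly legal yet ethically questionable.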

These examples point to one thing: we should not reduce AI and data ethics to compliance. Doing so not only blinds us to the many ethical risks of AI and data that lie beyond legality; it also reinforces the dubious idea that the only thing that matters is using AI and data to serve our goals in a legal way, whether or not those goals are legitimate in the first place.

The Impact of Ethics on Law

Our shared ethical standards shape the content of law. Laws are not created in a vacuum; rather, they are deeply influenced by the prevailing moral and ethical beliefs of a society.

A prominent example from the UK that illustrates this relationship is the enactment of the Modern Slavery Act 2015. The Modern Slavery Act 2015 was a landmark piece of legislation in the UK aimed at combating modern slavery and human trafficking. Before its enactment, there was a growing ethical and public concern over modern slavery practices within the UK and globally. In response to these ethical considerations and public pressure, the UK government introduced the Modern Slavery Act, which ensures that perpetrators can receive suitably severe punishments, and enhances support and protection for victims. It also introduced measures that require businesses to disclose what actions they have taken to ensure their supply chains are free from slavery.[1]

This legislation demonstrates how shared ethical standards—specifically, the belief in human rights and the protection of human dignity—can lead to the creation of laws that address societal issues and reflect contemporary moral values.

Things are no different in the world of AI and data. Many of the AI and data regulations familiar to us today were driven by a paradigm shift in our shared social practices and ethical standards. The GDPR, for example, was shaped by a series of historical developments, technological advancements, and societal shifts regarding privacy and data protection. The rapid advancement of digital technology, the explosion of internet usage, and the advent of social media platforms led to an unprecedented increase in the collection, storage, and processing of personal data, and these developments caused widespread ethical concern over the ways in which technology giants exploit our personal data. There were also high-profile data breaches and surveillance revelations (e.g. those by Edward Snowden in 2013) over the last decade, which heightened public awareness of privacy.

Anu Bradford, in her book Digital Empires, offers a similar insight. She finds, for instance, that the American and European models of AI and data governance are shaped considerably by the shared ethical priorities of their people. The American model, she argues, is centred on

protecting free speech, a free internet, and incentives to innovate. It is shaped by discernible techno-optimism, relentless pursuit of innovation, and uncompromised faith in markets as opposed to government regulation. Under this worldview, the internet is viewed as a source of economic prosperity and political freedom and as a tool for societal transformation and progress.[2]

This model, according to Bradford, can be traced back to the technological ideals of California, where many leading tech giants (e.g. Apple, Google, Meta) were cultivated. Many people there share the belief that technology has an emancipatory potential to solve the major problems facing humanity. The American model therefore favours deregulation in the realm of AI and data, and seeks to establish an environment in which tech companies can flourish.

In contrast, the European model is rights-driven. It

views governments as having a central role in both steering the digital economy and in using regulatory intervention to uphold the fundamental rights of individuals, preserve the democratic structures of society, and ensure a fair distribution of benefits in the digital economy.[3]

This model results from the public culture of Europe, which ‘identifies democracy, fairness, and fundamental rights as key values guiding EU policymaking’[4].

So, with all of this in mind, why is it important for your organisation to appreciate the impact of ethics on law? Because doing so better prepares you to meet future legal requirements. For example, companies that were early to adopt data protection and privacy practices in line with UK GDPR principles often found themselves ahead of the curve when the regulation came into effect. Organisations that participate actively in AI and data ethics conversations, programmes and training will likewise be prepared for what is coming.

Law and Moral Reasoning

Decisions on the content and applications of law are supported by moral reasoning. Moral reasoning, in other words, is an integral part of judicial and legislative decisions.

One notable example of how moral reasoning affects legislative decisions is the attempts to legalise same-sex marriage in various countries, including the UK. Supporters of same-sex marriage often argue from a moral perspective that denying same-sex couples the right to marry is discriminatory and violates principles of equality, dignity, and basic human rights. It is often contended that all individuals should have the same legal recognition and protection for their relationships, regardless of sexual orientation or gender identity. The ethical values of love, commitment, family unity, fairness and inclusion are also frequently cited in legal arguments supporting same-sex marriage.

Moreover, Joseph Raz, one of the most influential legal, moral and political philosophers of recent times, observed that

[judges’] decisions, all their decisions, are based on considerations of political morality…Their decisions are moral decisions in expressing a moral position. A conscientious judge actually believes in the existence of a valid doctrine, a political morality, which supports his action.[5]

Political morality is a crucial part of ethics: it refers to the ethical principles and values that guide the behaviour and decision-making of individuals and institutions within the realm of politics. It encompasses the moral standards that govern the actions of political leaders, governments, political parties, and citizens in their interactions with one another and with society as a whole.

In the sphere of data protection, the Investigatory Powers Act (IPA) 2016 is a good example of the intimate relationship between moral reasoning and law.

The IPA grants UK intelligence agencies and law enforcement authorities extensive powers to conduct surveillance and gather communications data to combat terrorism, serious crime, and other national security threats. It allows for the bulk collection of internet browsing histories, phone records, and other forms of communications data, as well as the interception of communications through various means, including hacking and the use of interception warrants. However, the IPA also requires that any surveillance activity authorised under the legislation must be necessary and proportionate to the objective pursued. This means that surveillance measures should only be used when they are essential for achieving legitimate national security, law enforcement, or other specified purposes.

In addition, the IPA includes provisions requiring transparency reporting by public authorities engaged in surveillance activities. These authorities are required to publish annual transparency reports disclosing certain information about their use of surveillance powers, including the number of surveillance warrants issued, the types of surveillance conducted, and the purposes for which surveillance was authorised.[6] Clearly, the Act is underpinned by a view of what matters morally. It upholds (1) national security, the protection of the nation from harm; (2) public safety, the prevention of terrorist attacks and disruptive criminal activity by malicious actors; and (3) proportionality, the idea that surveillance measures should be proportionate to the threats facing our society.

Another example is the Freedom of Information Act 2000 (FOIA). The FOIA establishes a general right for individuals to access information held by public authorities. Any person, regardless of nationality or residency, can make a request for information under the act. Public authorities are obligated to respond to information requests within specific timeframes and provide the requested information, unless an exemption applies.[7] The FOIA reflects a moral imperative to empower citizens and promote active participation in democratic society. By providing individuals with the right to access information about government policies, decisions, and activities, the act seeks to facilitate informed public debate, foster civic engagement, and empower citizens to exercise their rights and responsibilities as members of society. Access to information is seen as essential for individuals to make informed choices, hold public officials to account, and contribute to the democratic process. The FOIA, in short, is very much inspired by democratic values.

Awareness of the moral reasoning behind legal decisions allows businesses to anticipate and mitigate legal risks more effectively. By understanding the ethical considerations shaping the law, businesses can identify potential legal challenges, assess the implications of their actions, and implement strategies to minimise legal exposure and liability.

Conclusion

In conclusion, the discourse surrounding AI and data ethics transcends mere legal compliance, encompassing broader ethical considerations that underpin responsible technological development and deployment.

While legal frameworks provide important guardrails for regulating AI and data practices, they are not exhaustive in addressing the ethical complexities inherent in these domains. By recognising the impact of ethics on law and understanding the moral reasoning behind legal decisions, businesses can navigate the evolving landscape of AI and data with greater foresight and ethical awareness.

IGS offers a series of training modules in AI and data ethics that take seriously the interplay between ethics and law. If you are seeking to enhance the AI and data ethical literacy of your members, we will be glad to discuss with you what training options we might offer.


[1] UK Government (2015). Modern Slavery Act 2015. [online] www.legislation.gov.uk. Available at: https://www.legislation.gov.uk/ukpga/2015/30/contents/enacted.

[2] Bradford, A. (2023). Digital Empires. Oxford University Press, p.38.

[3] Ibid., p.115.

[4] Ibid.

[5] Raz attributes this view to Ronald Dworkin, with whom he disagrees, but Raz makes it explicit that he ‘fully share[s] it’. See Raz, J. (1995). Authority, Law, and Morality. Ethics in the Public Domain: Essays in the Morality of Law and Politics, [online] pp.210–237. doi:https://doi.org/10.1093/acprof:oso/9780198260691.003.0010.

[6] UK Government (2016). Investigatory Powers Act 2016. [online] www.legislation.gov.uk. Available at: https://www.legislation.gov.uk/ukpga/2016/25/contents/enacted.

[7] UK Government (2000). Freedom of Information Act 2000. [online] Legislation.gov.uk. Available at: https://www.legislation.gov.uk/ukpga/2000/36/contents.
