At its most recent Annual Meeting, the World Economic Forum concluded that ‘misinformation and disinformation is the most severe short-term risk the world faces. AI is amplifying manipulated and distorted information that could destabilize societies’[1]. Many organisations have called for better regulation of AI misinformation/disinformation, although it is not yet clear how this should be done.
It is important to draw a distinction between misinformation and disinformation, even though both refer to information that is misleading or factually inaccurate. According to the American Psychological Association,
Misinformation is false or inaccurate information—getting the facts wrong. Disinformation is false information which is deliberately intended to mislead—intentionally misstating the facts.[2]
In short, misinformation refers to any information that is false or inaccurate, whereas disinformation is a specific kind of misinformation that is intentionally produced. The former is not necessarily intentional. For example, if someone genuinely believes in the false information generated by AI and uses it, it should be classified as ‘misinformation’ rather than ‘disinformation’.
In both ethics and law, intention makes a difference to the wrongfulness of someone’s act. It is often assumed that deliberately doing something harmful is morally worse than unintentionally doing something similarly harmful, other things being equal. This article accepts that assumption and holds that the intentional nature of disinformation makes it more problematic than misinformation. Misinformation and disinformation by AI, therefore, require different regulatory strategies.
This article proceeds as follows. First, I explain why AI has produced a flood of misinformation. Second, I consider the standard arguments for restricting AI-generated misinformation. Third, I explore two common objections to regulating AI misinformation and defend some principles that should guide the design of regulations applicable to AI misinformation/disinformation. The final section concludes.
The Flood of AI Misinformation
Here are two simple reasons why AI has led to an explosion of misinformation:
- It makes false information look realistic.
- It is not costly, in terms of time or money, to generate misinformation with AI.
There are several reasons why AI misinformation looks realistic. Many AI models (e.g. ChatGPT) are trained on vast amounts of data from the internet, and such data spans a wide range of writing styles, topics and viewpoints. It is therefore easy for AI to mimic the patterns and structures found in real text, making it increasingly difficult to distinguish AI misinformation from authentic information. Moreover, many generative AI models are designed to understand and generate text contextually, enabling them to grasp the fluidity of language and to produce sentences that read as though they were cogently written by a human.
The ability of generative AI models to produce information with contextual awareness also allows them to generate content that coheres with a particular narrative, agenda or theme. It is thus common for generative AI to combine real facts with misleading details, making it very hard to spot the questionable parts of what it produces. Nor does AI misinformation take only the form of text: AI can produce extremely realistic images, videos and audio that are essentially fake.
With little to no cost, false information can be produced by AI in seconds. Here are some examples of the deliberate production of misinformation by AI in the run-up to the 2024 US presidential election:
- ‘AI-manipulated videos and images of political leaders have made the rounds on social media. Examples include a video that depicted President Biden making transphobic comments and an image of Donald Trump hugging Anthony Fauci’[3].
- There are AI-generated images showing ‘Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic [in the US]’[4].
- ‘Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways…[that smooth] out human errors like poor syntax and mistranslations’[5].
- ‘AI audio parodies of US presidents playing video games became a viral trend. AI-generated images that appeared to show Donald Trump fighting off police officers trying to arrest him circulated widely on social media platforms’[6].
Why Regulate AI Misinformation?
Many arguments for regulating misinformation/disinformation rest on consequentialist grounds, focusing on its negative impact on, for instance, the following:
- Public Safety and Health: Misinformation can threaten public safety and health. For example, it can bring about public health crises, as the flood of misinformation about COVID vaccines during the global pandemic demonstrated.
- Democratic Quality: Misinformation shapes citizens’ political opinions. When those opinions rest on inaccurate information, it becomes increasingly difficult for citizens to (1) reconcile their views with one another, and (2) make sound, fact-based political judgments, thereby fuelling civic hostility and political ignorance.
- Social Stability: Misinformation is a potential source of public fear, panic and conflict. It can fuel racism, violence and so on.
- National Security: It is often argued that malicious actors, especially foreign ones, can manipulate domestic public opinion and provoke unrest by producing misinformation. In a democratic context, this is a threat to citizens’ capacity for self-determination.
- Distrust in Institutions: Misinformation can arbitrarily erode trust in any institution, including the media, scientific organisations and government. Even when an institution is in fact a reliable source of information, a flood of misinformation will undermine its credibility.
- Accountability: Regulating misinformation can help create a public culture in which individuals and organisations take more seriously the information they use.
Currently, no UK legislation directly regulates misinformation, but several laws capture some forms of it:
- The Malicious Communications Act 1988: The Act applies to those ‘who send or deliver letters or other articles for the purpose of causing distress or anxiety’[7]. In other words, disinformation aiming to cause distress or anxiety is unlawful.
- Defamation Act 2013: To strike a fair balance between ‘the right to freedom of expression and the protection of reputation’[8], the Act regulates the publication of statement(s) that ‘has caused or is likely to cause serious harm to the reputation’[9] of someone. The Act, therefore, applies only to misinformation containing defamation.
- The Communications Act 2003: The Act applies to the sending of messages ‘by means of a public electronic communications network’[10] that are grossly offensive, indecent, obscene or menacing, or that are sent for the purpose of causing annoyance, inconvenience or anxiety. Spreading misinformation/disinformation falling under such categories is unlawful.
- Online Safety Act 2023: The Act imposes duties on providers of online services to ‘identify, mitigate and manage the risks of harm…from (i) illegal content and activity, and (ii) content and activity that is harmful to children’[11]. Online platforms, therefore, should monitor misinformation which has such harmful effects.
In short, there are currently two strategies for regulating misinformation. The first is to hold individuals accountable for the misinformation/disinformation they produce or use, to the extent that it has socially harmful consequences. The second is to incentivise online platform providers to control the spread of harmful misinformation/disinformation, and to hold them responsible if they fail to exercise that control adequately.
Why Not Regulate AI Misinformation?
Generative AI (i.e. algorithms that can generate content such as text, videos, images and audio) is often criticised for its potential to produce misleading content. Compared with regulating misinformation that is not generated by AI, however, regulating AI misinformation involves more complex considerations.
One common argument against regulating AI misinformation is that doing so infringes free speech. This argument, however, oversimplifies the picture. It is widely accepted that, while free speech ought to be a key right of citizens, it must always be weighed against other rights. If, for instance, innocent people will be killed as a result of someone freely exercising her freedom to spread racial hatred, then it is largely uncontroversial to place at least temporary limitations on her free speech. There is no reason to suppose that free speech, in the domain of AI, is a non-negotiable constraint that must be honoured at all costs.
Another common objection to regulating AI misinformation is that regulation inevitably slows down AI advancement. For example, restrictions on AI misinformation might make the developers of generative AI cautious about what data to use for training their models. They might also be subject to a wide range of legal rules when developing AI, making them less competitive domestically and internationally, since AI developers in countries without such restrictions are likely to have a head start.
This objection is legitimate, but it again confronts us with the dilemma of weighing AI progress against the rights of individuals vulnerable to AI misinformation/disinformation. The following three principles, in my view, are well placed to address our interest in not stifling AI development, without compromising too much our concern for social groups vulnerable to AI misinformation.
First, people should be held legally responsible if they (a) deliberately use AI to produce misinformation that has harmful social outcomes, or (b) develop AI systems purposefully designed to generate misinformation that creates such outcomes. In short, AI disinformation should be unlawful. Note that this principle turns on the intentions of those who use AI misinformation, and of those who develop misinformation-producing AI tools, for socially harmful purposes.
What about cases in which someone does not produce/use AI misinformation intentionally, but nevertheless brings about socially harmful consequences? My suggestion is that, while she should be held morally responsible for contributing to those consequences through her ignorance, she should not be punished for being ignorant.
Second, AI developers should establish transparent and effective procedures to correct their systems’ tendencies to generate misinformation. It is acceptable for AI systems to make mistakes, so long as their developers have endeavoured to establish processes that anticipate and correct such mistakes. This also implies that AI developers ought not to be punished for the inaccuracy of the information their systems generate; they should, however, be held accountable for failing to put effective monitoring procedures in place.
Third, there should be long-term, if not permanent, measures by which corporations and the state enhance citizens’ digital literacy, so that citizens are aware of how AI can be misused to generate misinformation and of ways to identify it.
These three principles should guide attempts to regulate AI misinformation/disinformation in the near future. On the one hand, the first and second principles provide tech companies with considerable space for innovation, while leaving adequate room for prosecuting attempts to create AI misinformation for harmful purposes. AI developers should not be punished for failing to produce perfectly reliable systems, as the caution this would require makes it more difficult for them to focus on innovation; but they do have a duty to establish transparent and effective procedures to identify and correct the errors of their systems. On the other hand, the more successful we are in enhancing citizens’ digital literacy, the less necessary it becomes to control AI misinformation by regulatory means.
Conclusion
Regulating AI misinformation is not simply a question of striking trade-offs between free speech and the harmful effects of misinformation; how prospective AI regulations will affect AI development and its positive impact on humanity must also be considered. The new EU AI Act will have a considerable impact on the ethical and legal landscape of AI misinformation, and IGS is willing to help your organisation navigate the relevant changes.
[1] Torkington, Simon. 2024. “The World Is Changing and so Are the Challenges It Faces.” World Economic Forum. January 13, 2024. https://www.weforum.org/agenda/2024/01/ai-disinformation-global-risks/.
[2] American Psychological Association. 2022. “Misinformation and Disinformation.” https://www.apa.org/topics/journalism-facts/misinformation-disinformation.
[3] Ryan-Mosley, Tate. 2023. “How Generative AI Is Boosting the Spread of Disinformation and Propaganda.” MIT Technology Review. October 4, 2023. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/.
[4] Klepper, David, and Ali Swenson. 2023. “AI-Generated Disinformation Poses Threat of Misleading Voters in 2024 Election.” PBS NewsHour. May 14, 2023. https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election.
[5] Johnson, Douglas, Rachel Goodman, J Patrinely, Cosby Stone, Eli Zimmerman, Rebecca Donald, Sam Chang, et al. 2023. “Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model.” February. https://doi.org/10.21203/rs.3.rs-2566942/v1.
[6] Robins-Early, Nick. 2023. “Disinformation Reimagined: How AI Could Erode Democracy in the 2024 US Elections.” The Guardian, July 19, 2023, sec. US news. https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections.
[7] “Malicious Communications Act 1988.” Legislation.gov.uk. https://www.legislation.gov.uk/ukpga/1988/27.
[8] UK Parliament. 2021. “Defamation Act – Parliamentary Bills – UK Parliament.” Bills.parliament.uk. March 18, 2021. https://bills.parliament.uk/bills/983.
[9] “Defamation Act 2013.” Legislation.gov.uk. https://www.legislation.gov.uk/ukpga/2013/26.
[10] Public Prosecution Service for Northern Ireland. n.d. “Guidelines for Prosecuting Cases Involving Electronic Communications: Draft for Consultation (November 2021).” https://consultations.nidirect.gov.uk/doj/pps-guidelines-for-prosecuting-cases-involving-ele/user_uploads/guidelines-for-prosecuting-cases-involving-electronic-communications-1.pdf.
[11] “Online Safety Act 2023.” Legislation.gov.uk. https://www.legislation.gov.uk/ukpga/2023/50/enacted.