Equality, Diversity and Inclusion (EDI) in AI and Data

People talk about equality, diversity and inclusion (EDI) everywhere. Corporations that fail to demonstrate a commitment to EDI are likely to be fiercely criticised and to suffer reputational damage. In recent years, EDI has also become a key agenda for almost all AI and data practitioners. But what exactly does EDI require? What exactly should your organisation do to take EDI seriously? More specifically, in the age of AI and data, what does it take for your AI and data practices to uphold EDI?

The aim of this article is to give partial answers to these questions. I say ‘partial’ because EDI is not an end-point that will be achieved once your organisation puts some policies in place. Instead, we should see EDI as a set of moral ideals, to be achieved by actively reflecting on and challenging our existing social, economic and political order. In this article, I will discuss

  • the key factors inspiring contemporary EDI conversations;
  • the reasons to care about EDI in the workplace, and in society in general; and
  • EDI issues related to AI and data practices, and how we might mitigate them.

Equality, Diversity and Inclusion: History and Meaning

Today, equality, diversity and inclusion have become umbrella terms associated with a wide range of social, political and economic injustices, especially in the Western context. To begin, it is important to understand which historical factors led to today's EDI discussions.

Social and political movements over the last century have prompted EDI-related regulations. In the mid-20th century, for instance, there were significant civil rights movements worldwide, particularly in the US, pushing for equal rights and opportunities for all, regardless of race and ethnicity. The feminist movement, which gained significant momentum in the 20th century, also brought attention to gender disparities and their related injustices. These, in conjunction with the many labour movements for equitable work conditions that have emerged across centuries, as well as issues of cultural diversity raised by increasing global migration, have all contributed to the significance we attach to equality, diversity and inclusion in modern times.

These movements have also accelerated a range of EDI-driven legislative actions, such as the Civil Rights Act of 1964 in the US, the UK Race Relations Act 1976, and so on. Such legal change has had a considerable impact on the workplace. For instance,

‘Workplace diversity training first emerged in the mid-1960s following the introduction of equal employment laws and affirmative action…These new laws prompted companies to start diversity training programs that would help employees adjust to working in more integrated offices’[1].

Ethicists disagree considerably about what equality, diversity and inclusion mean, and what they require in different contexts. But almost all agree that some disadvantaged groups, or some forms of inequality, call for corrective action. Here are some of the inequalities and injustices that typically underpin EDI measures:

  • Ethnic/Racial Inequality: Ethnic and racial minorities often face barriers to (1) employment and opportunities for promotion, (2) educational resources and attainments, (3) healthcare services, (4) fair representation in the criminal justice system, (5) housing, and (6) representation in mainstream media. This has led to the underrepresentation and stigmatisation of ethnic/racial minorities.

  • Educational Inequality: There are disparities among different groups in their access to and quality of educational resources/attainment. Such disparities can result from economic, gender and racial inequalities, or other demographic factors. Educational inequality is particularly relevant to EDI in the workplace, as it is an important factor determining one’s career prospects.

  • Wealth Inequality: Wealth has a considerable impact on one’s access to a range of social resources (e.g. educational resources, housing). In particular, intergenerational transfers of wealth play a significant role in perpetuating wealth inequality. To mitigate the impact of such perpetuating inequality, for instance, many privileged institutions have introduced recruitment quotas for economically disadvantaged individuals, so that, over the long run, there will be more representation of people from less financially privileged backgrounds.

  • Gender Inequality: Gender inequality often manifests itself in (1) the gender pay gap, (2) occupational segregation, in which men are overrepresented in higher-paying fields and positions, (3) the underrepresentation of women in leadership positions in politics, business, academia and other sectors, and (4) the disproportionate burden of care work, such as childcare and household chores, that falls on women. Our current socio-economic system is also unfavourable to transgender and gender-nonconforming people.

  • Historical Injustice: Marginalised groups are disadvantaged partly because of historical injustices. These include, for example, slavery, colonisation and exploitation of indigenous peoples, genocide, segregation laws, forced assimilation policies and systemic discrimination.

This list of inequalities/injustices is certainly not exhaustive. Our collective imagination of EDI is heavily shaped by the new challenges facing our society. The far-reaching impact of AI and data is one of those challenges.

This background to modern EDI discussions, moreover, ought to be taken seriously even if you are looking for EDI advice specific to AI and data practices. As we shall see later, if the organisations to which most AI and data specialists belong lack adequate diversity, the inequalities and injustices above will be amplified by new AI/data systems.

Why Care about EDI?

Why should your organisation (and perhaps everyone) care about EDI? John Rawls, the most influential political philosopher of the 20th century, famously maintained that our place in society is entirely contingent, and that many of us are socially privileged because we are lucky enough to have the natural endowments and social conditions that yield success.[2] Let us call this ‘the fact of contingencies’. When we think about what social and political system we should have, we should see each other as moral equals by taking this fact of social contingencies seriously. Similarly, marginalised groups in society are often disadvantaged by contingent factors outside their control. Failing to observe how lucky we are, and excluding less fortunate individuals when arranging our everyday institutions (e.g. schools, workplaces, government), is a failure to respect moral equality. EDI initiatives, in a sense, are focused on addressing such unfair contingencies.

But there are economic benefits to EDI initiatives as well. For instance, research has found that

  • ‘companies in the top quartile for ethnic and cultural diversity outperform those in the fourth by 36% in profitability… diverse companies report 19% higher innovation revenues’[3].
  • When employees feel that ‘their organisation is committed to and supportive of diversity…their ability to innovate increases by 83%’[4].
  • Diversity has a positive impact on the quality of deliberation processes.[5]
  • ‘76% of job seekers and employees believe that a diverse workforce is an important factor when evaluating job offers, and nearly a third (32%) would not apply to a company that lacks diversity’[6].

These findings suggest at least that EDI, when promoted in the right way, can be expected to enhance the performance, morale and attractiveness of your organisation.

EDI Issues in AI and Data, and How to Address Them

AI and data have had an unprecedented impact on our society. Here are some familiar EDI issues in AI and data, and possible ways to address them.

Algorithmic Biases

AI systems learn from data. If the training data for AI systems is biased or unrepresentative, the systems can perpetuate or even amplify existing biases. IBM has given some relevant examples:[7]

  • Because of the underrepresentation of data of minority groups, ‘computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for black patients than white patients’.
  • Applicant tracking systems produce biased results. For example, ‘Amazon stopped using a hiring algorithm after finding it favored applicants based on words like “executed” or “captured,” which were more commonly found on men’s resumes’.
  • When the AI art generation application Midjourney was ‘asked to create images of people in specialized professions, it showed both younger and older people, but the older people were always men, reinforcing gendered bias of the role of women in the workplace’.

Algorithmic biases have also been a key issue in online advertising and predictive policing. They can be addressed through diverse and representative data collection, bias-mitigation algorithms, and regular monitoring and auditing.
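To make the monitoring-and-auditing step concrete, here is a minimal sketch of one common audit: comparing a system's positive-decision rates across demographic groups. The data, the group labels and the `shortlisted` decision are hypothetical, and the 0.8 threshold follows the widely used 'four-fifths rule' from US employment-discrimination screening; a real audit would use your organisation's own data, metrics and legal context.

```python
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive decisions per group.

    `records` is a list of (group, decision) pairs, where `decision`
    is True if the system made a positive call (e.g. shortlisted).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate.

    Values below ~0.8 (the 'four-fifths rule') suggest the system's
    decisions warrant closer human review.
    """
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, was the applicant shortlisted?)
records = ([("A", True)] * 60 + [("A", False)] * 40
           + [("B", True)] * 30 + [("B", False)] * 70)

ratio = disparate_impact_ratio(records)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
if ratio < 0.8:
    print("flag for review: selection rates differ markedly across groups")
```

A single ratio is of course only a screen, not a verdict: it cannot tell you *why* rates differ, so a flagged result should trigger the kind of deeper human review discussed above, not an automatic conclusion of bias.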

Lack of Diversity in AI Development

The current ecosystem of AI expertise and funding itself falls short on diversity. For instance, a New York University study found that ‘More than 80% of AI professors are men, and only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women’[8]. There are also geographical disparities in AI development. According to the World Economic Forum,

an Oxford Insights assessment of 181 countries around the world and their preparedness in using AI in public services highlights that the lowest-scoring regions include much of the Global South, such as sub-Saharan Africa, some Central and South Asian countries, and some Latin American countries.[9]

Nowadays, the development of AI systems often involves advice from AI ethicists. What is concerning, however, is that even the field of AI ethics lacks diversity; research on AI fairness is a case in point. According to Nature,

Scientists analysed 375 research and review articles on the fairness of artificial intelligence in health care, published in 296 journals between 1991 and 2022. Of 1,984 authors, 64% were white, whereas 27% were Asian, 5% were Black and 4% were Hispanic (see ‘Gaps in representation’). The analysis…also found that 60% of authors were male and 40% female, a gender gap that was heightened among last authors, who often have a senior role in leading the research.[10]

The direct result of a lack of diversity in the AI workforce is that AI development is constrained by a limited range of perspectives, which can reinforce existing algorithmic biases, since the system designers and evaluators share the same blind spots. Enhancing the diversity of your workforce will therefore allow your organisation's AI systems, if any, to benefit from a wider range of perspectives, and put your organisation in a better position to identify those systems' potential biases.

Accessibility to AI

Using AI systems requires a certain level of digital literacy, and not all AI systems are user-friendly for everyone. For example, when your organisation adopts new AI systems, some of your employees might find it difficult to master them and thus feel left behind; their backgrounds may limit their digital literacy, internet access or technological capabilities. Another typical example is voice-based AI assistants that fail to recognise non-standard speech patterns or accents, which can exclude people with speech impairments, or non-native speakers, from using such assistive technologies.

The key takeaway here, therefore, is that your organisation ought to put effective measures in place whenever it introduces new AI or data systems to employees, so that everyone, whatever their level of digital literacy, can use those systems effectively.

Given the technical nature of different organisations' AI/data systems, as well as the complexity of how those systems interact with AI/data regulations, it will be best for your organisation to seek external advisors (e.g. IGS) to consider what EDI measures suit your particular circumstances.


As with many other EDI issues, there are no perfect solutions to EDI problems in the domain of AI and data. For example, many of us once thought that digitalising our hiring processes would make them fairer and less biased. As it turns out, however, algorithms themselves can be biased, and AI and data developers and ethicists are still working hard to address this. This is also why, as I said at the beginning, EDI is not an endpoint that can be achieved once certain measures are in place. Organisations need standing practices, and people tasked with evaluating whether and how their procedures and systems marginalise certain groups. Most importantly, your organisation should seek to develop a culture in which everyone actively familiarises themselves with EDI-related issues, and their implications for AI and data processes.

To help you navigate EDI challenges in AI and data, IGS has developed a range of training modules for your organisation. You are welcome to contact our AI and data governance specialists for an initial inquiry about such training opportunities.

[1] Dong, S. (2021). The History and Growth of the Diversity, Equity, and Inclusion Profession. [online] Global Research and Consulting Group Insights. Available at: https://insights.grcglobalgroup.com/the-history-and-growth-of-the-diversity-equity-and-inclusion-profession/.

[2] Rawls, J. (1971). A Theory of Justice. Harvard University Press.

[3] Rizvi, J. (2023). How AI Can Be Leveraged For Diversity And Inclusion. [online] Forbes. Available at: https://www.forbes.com/sites/jiawertz/2023/11/19/how-ai-can-be-leveraged-for-diversity-and-inclusion/?sh=6fe168194ee9 [Accessed 20 Feb. 2024].

[4] Guerra, S. (2020). Invest in Inclusion: The Business Case for EDI – Diversity Digest. [online] Diversity Digest. Available at: https://blogs.kcl.ac.uk/diversity/2020/11/02/invest-in-inclusion-the-business-case-for-edi/.

[5] Bergold, A.N. and Bull Kovera, M. (2021). Diversity’s Impact on the Quality of Deliberations. Personality and Social Psychology Bulletin, p.014616722110409. doi:https://doi.org/10.1177/01461672211040960.

[6] Chen, J. (2022). Here’s how to tailor employee benefits to a diverse workforce. [online] World Economic Forum. Available at: https://www.weforum.org/agenda/2022/09/employee-benefits-diversity/.

[7] IBM Data and AI Team (2023). Shedding light on AI bias with real world examples. [online] IBM Blog. Available at: https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/.

[8] Paul, K. (2019). ‘Disastrous’ Lack of Diversity in AI Industry Perpetuates bias, Study Finds. [online] The Guardian. Available at: https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study.

[9] Yu, D., Rosenfeld, H. and Gupta, A. (2023). The ‘AI divide’ between the Global North and Global South. [online] World Economic Forum. Available at: https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/.

[10] Wong, C. (2023). AI ‘fairness’ research held back by lack of diversity. Nature. doi:https://doi.org/10.1038/d41586-023-00935-z.

