#BreakThe(Algorithmic)Bias

The official theme of this year’s International Women’s Day campaign – #BreakTheBias – is a reminder of ongoing concerns surrounding algorithmic bias in the world of data privacy. Algorithmic bias is one of the most prominent new ethical and regulatory challenges created by AI technologies. But what steps are required to manage these risks, and to seize the opportunities that better use of data offers to enhance fairness?

In all areas of digital technology, there is concern that these technologies might reproduce and reinforce existing patterns of bias and discrimination against people based on their gender, race, disability or other characteristics. These biases arise when AI systems learn from data that is unbalanced and/or reflects past discrimination. As a result, the systems may produce outputs with discriminatory effects.

The ICO lists (see: ‘Why might an AI system lead to discrimination?’) several potential causes of bias in AI systems, including:

  • Imbalanced training data (illustrated in the sketch after this list);
  • Training data which reflects past discrimination;
  • Prejudice or bias in the way variables are measured, labelled or aggregated;
  • Biased cultural assumptions of developers;
  • Inappropriately defined objectives;
  • The way the model is deployed.
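
To make the first of these causes concrete, the short Python sketch below shows how a group that is under-represented in the training data can end up poorly served by a model that looks accurate overall. The data, group labels and figures are synthetic and invented purely for illustration; this is a minimal sketch using scikit-learn, not a depiction of any real system.

```python
# A minimal sketch of how imbalanced training data can skew outcomes.
# All data and group names are synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A is heavily over-represented relative to group B.
n_a, n_b = 900, 100
X_a = rng.normal(size=(n_a, 2))
X_b = rng.normal(size=(n_b, 2))

# The true outcome depends on feature 0 for group A,
# but on feature 1 for group B.
y_a = (X_a[:, 0] > 0).astype(int)
y_b = (X_b[:, 1] > 0).astype(int)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

# One model is fitted to the pooled data, dominated by group A.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Per-group accuracy: the under-represented group fares far worse.
for g in ("A", "B"):
    mask = group == g
    print(g, "accuracy:", round((pred[mask] == y[mask]).mean(), 2))
```

Because the model is fitted overwhelmingly to the majority group, its accuracy for the minority group can be little better than chance, even though nothing in the code discriminates explicitly.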

As a result of increases in the complexity of algorithms, and in the amount of available data to feed them, the risk of algorithmic bias is only becoming greater.

Ethically, this is concerning, and the concern is reflected in the United Nations Educational, Scientific and Cultural Organization (UNESCO) global agreement on the ethics of AI. Within this guidance, UNESCO identifies the potential for embedding bias as one of the most fundamental ethical concerns of AI technologies and recommends that ‘AI actors should make all reasonable efforts to minimise and avoid reinforcing or perpetuating discriminatory or biased applications and outcomes.’

However, actually addressing algorithmic bias can be a difficult task: there is no universal formulation or rule that tells you whether an algorithm is fair. Data protection legislation offers some guidance on what is and is not allowed: any processing that leads to unjust discrimination between people will violate the fairness principle (the requirement that processing of personal data must be ‘fair’). In addition, the UK Equality Act 2010 protects individuals from direct and indirect discrimination, and applies to both human and automated decision-making systems. The challenge for organisations is to build the skills and capacity to understand bias and to determine appropriate means of addressing it where it is found.
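
To illustrate why no single rule settles the question, the hypothetical sketch below computes two widely used fairness measures on the same set of decisions: demographic parity (do both groups receive positive decisions at the same rate?) and equal opportunity (among people who merit a positive outcome, are both groups treated alike?). All figures are invented; the point is simply that the same decisions can satisfy one measure while failing the other.

```python
# A minimal sketch showing that two common fairness measures can
# disagree on the same decisions. All figures are invented.
import numpy as np

# y = true outcome, pred = the model's decision, per (invented) group.
y_a    = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_b    = np.array([1, 1, 0, 0])
pred_b = np.array([1, 1, 1, 0])

def selection_rate(pred):
    # Share of all individuals receiving a positive decision.
    return pred.mean()

def true_positive_rate(y, pred):
    # Share of genuinely positive individuals receiving a positive decision.
    return pred[y == 1].mean()

# Equal opportunity holds (both TPRs are 1.0), yet demographic parity
# fails (selection rates of 0.5 versus 0.75).
print("Selection rate  A:", selection_rate(pred_a), " B:", selection_rate(pred_b))
print("TPR             A:", true_positive_rate(y_a, pred_a),
      " B:", true_positive_rate(y_b, pred_b))
```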

The Centre for Data Ethics and Innovation (CDEI) recently published a review which found that, while organisations are aware of the risks of bias, they are unsure how to address them in practice. Echoing these CDEI findings, The Lancet and Financial Times Commission on governing health futures 2030 found that health professionals are not necessarily prepared or trained to respond to concerns about algorithmic bias in digital health.

These findings are concerning, and organisations need to be clear about their own accountability for getting it right. In a recent judgment, the Court of Appeal found that South Wales Police had not met their obligations under the Public Sector Equality Duty in relation to their use of live facial recognition technology (a technology discussed more broadly in previous IGS Insights). One of the grounds on which the appeal succeeded was that the force had not taken reasonable steps to establish whether the algorithm contained bias. Moving forward, organisations will need to make sure that they have the right capabilities and structures to ensure the careful evaluation and mitigation of the potential risk of algorithmic bias.

While the CDEI foresees a future need to look again at the legislation addressing algorithmic bias, there is already plenty that AI actors can and should be doing. Data Protection Impact Assessments (DPIAs) and Equality Impact Assessments (EIAs) can help organisations systematically analyse, identify and minimise bias and discrimination.
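
As a sketch of what one such check might look like in practice, the hypothetical example below computes a disparate impact ratio and compares it against the ‘four-fifths’ rule of thumb. Note that this threshold is a convention borrowed from US employment practice rather than a requirement of UK law, and all figures and group names here are invented for illustration.

```python
# A minimal sketch of one check that could feed into a DPIA or EIA:
# the disparate impact ratio, assessed against the 'four-fifths'
# rule of thumb. Figures and group names are invented.

def disparate_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's positive-decision rate to the reference group's."""
    return rate_group / rate_reference

# Invented positive-decision rates observed while testing a model.
rates = {"group_a": 0.60, "group_b": 0.42}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = disparate_impact_ratio(rate, reference)
    flag = "review needed" if ratio < 0.8 else "within threshold"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```

A failing ratio does not prove unlawful discrimination, but recording such checks gives an organisation evidence that it has looked for bias and a trigger for deeper investigation where disparities appear.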

Finally, it is worth remembering the potential for algorithms themselves to break biases and help create a fairer society.
