Artificial Intelligence: The Need to Update the Equality Act 2010

by Tanya Krupiy | Mar 7, 2024


About Tanya Krupiy

Tanya is a lecturer in law at Newcastle University. She has expertise in international human rights law, international humanitarian law and international criminal law. She is particularly interested in exploring the societal impact that new technologies create and how the norms of public international law can remain relevant in light of technological developments. Additionally, she researches how governments can bring domestic law into alignment with their international human rights law obligations. Tanya has expertise in artificial intelligence, robotic and digital technologies.

The Equality Act 2010 is not keeping up with developments in Artificial Intelligence (AI). The United Kingdom government's policy on AI suggests that the UK does not take this issue sufficiently seriously. In March 2023, the UK published a policy paper entitled “A pro-innovation approach to AI regulation”, which outlines how the government will approach developments in AI. The approach in the policy paper is problematic because it exacerbates the weaknesses of the Equality Act 2010 in the context of organisations using AI as part of decision-making processes.

A lack of government intervention to update the Equality Act 2010 will have serious consequences. The UK non-governmental organisation Public Law Project reported in October 2023 that UK public authorities are using 55 different AI-based decision-making tools. The operation of these AI tools shapes crucial decisions, such as how government authorities distribute public funds among competing needs. As a result, these AI-based decisions affect everyone. It is therefore concerning that the UK government is not taking any steps to identify how to update the Equality Act 2010.

Context: the UK policy paper

The problem with the current UK policy on AI is that it reflects a wait-and-see approach. According to the policy paper, the regulators will issue best-practice guidance to industry on how to implement five principles [49]. One of these regulators is the Equality and Human Rights Commission [case study 3.5]. The principles, which address how organisations can develop and use AI in a responsible manner, are (i) safety, security and robustness; (ii) transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress [10].

The policy paper also recognises that although the regulators will have a duty to implement these principles, they will have discretion over how to do so [12]. Additionally, the regulators can notify the government if they think that new legislation is needed [65]. This approach is problematic because the Equality Act 2010 has gaps that are already apparent. It makes little sense for the Equality and Human Rights Commission to issue non-legally binding guidelines to organisations [11], to wait until it detects problems and only then to recommend that the government take legislative action.

Unpacking the problem

Section 15 of the Equality Act 2010, which prohibits disability-based discrimination, and section 19, which prohibits indirect discrimination, allow organisations to invoke the justification that the treatment was a proportionate means of achieving a legitimate aim. Section 15 prohibits persons from treating an individual with a disability “unfavourably” due to something “arising in consequence” of the disability. Section 19 defines indirect discrimination as applying measures which put persons with a protected characteristic “at a particular disadvantage” when compared with persons who do not share that protected characteristic. An example of a justification could be meeting budgetary constraints [552].

The application of the principle of proportionality is problematic in the context of the use of AI in decision-making processes. Alex Engler explains that it would take “tremendous effort” to gather data to program AI which reflects the full diversity of disabilities. This lack of representative data can result in an AI system issuing a negative decision to a person simply because the system lacks data about similar candidates. The terms qualifying the obligations of legitimate aim and proportionality in the Equality Act 2010 create the danger of a slippery slope: organisations could argue that they did not have the resources to program an AI decision-making system which incorporates data about all forms of human diversity, such as disability. The Equality Act 2010 also opens up the possibility for companies to argue that they lack the resources to have a human being carry out the screening process.
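To make the underlying mechanism concrete, the following minimal Python sketch is entirely hypothetical: the profiles, features and numbers are invented for illustration and are not drawn from the article or any real system. It shows how a screening tool that scores candidates by similarity to past hires, none of whom have a disability-related feature such as a career gap, systematically penalises an otherwise comparable candidate whose profile includes that feature.

```python
# Hypothetical sketch: a toy screening model trained only on profiles with no
# disability-related features, so it scores down a comparable candidate who has one.
from statistics import mean

# Invented "successful hire" profiles: (years_experience, career_gap_years).
# No career gaps are represented in the historical data.
historical_hires = [(5, 0), (7, 0), (6, 0), (8, 0)]

def screening_score(candidate, history):
    """Score a candidate by average similarity to past hires (higher is better)."""
    distances = [abs(candidate[0] - h[0]) + abs(candidate[1] - h[1]) for h in history]
    return -mean(distances)

# Two equally experienced candidates; one has a two-year disability-related gap.
candidate_no_gap = (6, 0)
candidate_with_gap = (6, 2)

print(screening_score(candidate_no_gap, historical_hires))    # -1.0
print(screening_score(candidate_with_gap, historical_hires))  # -3.0: penalised solely for the gap
```

The gap between the two scores arises only because the historical data contains no one like the second candidate, which is the representativeness problem described above.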

The proportionality justification is problematic because AI has a limited capability to engage with human diversity [2]. The better approach is to outlaw the use of AI decision-making processes in certain contexts [21]. The policy paper exacerbates the weaknesses in the Equality Act 2010: it envisages the Equality and Human Rights Commission informing companies how to adopt proportionate measures to detect and mitigate bias, but in some contexts no such measures exist.
