Algorithms and Education: A New Frontier of Discrimination?

by Santosh Carvalho | Mar 15, 2021


About Santosh Carvalho

Santosh Carvalho is a graduate student pursuing the Bachelor of Civil Law at the University of Oxford. He has also worked in the technology industry for over 10 years.

Citations


Santosh Carvalho, “Algorithms and Education: A New Frontier of Discrimination?”, (OxHRH Blog, March 2021), <https://ohrh.law.ox.ac.uk/algorithms-and-education-a-new-frontier/>, [Date of access].

In this brief post, I want to demonstrate how ostensibly neutral and efficient algorithms can cause discrimination in education. Last year, the national advanced level qualification ('A-level') exams in the UK, which lead to places in university, further study, training, or work, had to be cancelled because of school closures during the COVID-19 pandemic. In mitigation, the Office of Qualifications and Examinations Regulation ('Ofqual') asked teachers to supply an estimated grade for each student and a ranking of each student against every other student at the school with the same estimated grade. This data was fed into an algorithm that also factored in the school's performance in the subject over the previous three years. The animating purpose behind the algorithm was to avoid 'grade inflation' and to ensure consistency with previous years' results. When the grades were announced, the outcome was devastating for many. In England, Wales and Northern Ireland, nearly 40% of results were lower than teachers' assessments. The effects of 'downgraded' results were disproportionately felt in comparatively poorly resourced state schools.
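To make the mechanism concrete, the sketch below shows, in simplified and hypothetical form, how a rank-based standardisation of this kind can work. It is not Ofqual's actual model; the function name, inputs and rounding rule are my own assumptions for illustration. The point it captures is that once students are placed in rank order, grades are allocated to match the school's historical distribution, so an individual teacher's estimated grade can be overridden.

```python
# Hypothetical sketch of rank-based standardisation -- NOT Ofqual's actual model.
# Assumption: a school's historical grade distribution for a subject is used to
# allocate grades to this year's cohort in teacher-assigned rank order.

from math import floor

def assign_grades(ranked_students, historical_distribution):
    """Assign grades so that the share of each grade roughly matches the
    school's historical distribution.

    ranked_students: student IDs ordered from highest to lowest teacher rank.
    historical_distribution: dict of grade -> historical fraction of students,
        listed from best grade to worst, e.g. {"A": 0.2, "B": 0.4, "C": 0.4}.
    """
    n = len(ranked_students)
    grades = {}
    cursor = 0
    cumulative = 0.0
    for grade, share in historical_distribution.items():
        cumulative += share
        # Students up to this cumulative share of the cohort receive this grade.
        upper = min(n, floor(cumulative * n + 0.5))
        for student in ranked_students[cursor:upper]:
            grades[student] = grade
        cursor = upper
    # Any students left over by rounding fall into the lowest listed grade.
    for student in ranked_students[cursor:]:
        grades[student] = list(historical_distribution)[-1]
    return grades

# Example: a five-student cohort forced into last year's distribution. The
# teacher's estimated grades play no direct role once the ranking is fixed,
# which is how individual results can end up 'downgraded'.
cohort = ["s1", "s2", "s3", "s4", "s5"]
history = {"A": 0.2, "B": 0.4, "C": 0.4}
print(assign_grades(cohort, history))
```

On these illustrative numbers, only one student can receive an A regardless of how many were predicted one, which is the design choice at the heart of the dispute.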

Discrimination Claim

The algorithm's results were subject to an indirect discrimination claim against Ofqual, brought by a student and supported by Foxglove (a technology-justice NGO), on the grounds of disability. Section 19 of the Equality Act 2010 precludes the application of facially neutral measures, like the algorithm, that put persons sharing a protected characteristic such as disability at a particular disadvantage in comparison with others who do not share that characteristic. There was prima facie evidence of differential impact on students with disabilities.

They were at a particular disadvantage compared to their non-disabled peers for two reasons. First, teachers often relied solely on in-class performance when estimating grades. Because in-school tests were relatively informal before the pandemic, students with disabilities may not have received the necessary adjustments for them; students with dyslexia, for example, may not have had a scribe available for all such tests. A key input into the algorithm therefore did not reflect their actual capability. Second, the Government implemented an appeals process that allowed students to rely on their mock exam results if these were higher than their algorithmically assigned grade. Students with disabilities who needed time off during the year, for example those with chronic illnesses such as ME, may have had poorer mock exam results. Another example is students with caring responsibilities for others in their household, who may not have been able to sit a mock exam at all.

Ofqual eventually abandoned the algorithm's results and the litigation did not need to proceed. Nevertheless, it is unlikely that the Government would have been able to defend this discrimination claim under s19(2)(d) of the Act. The Government did not adduce a legitimate aim in response to the Claimant's pre-action letter, and even if it had, the algorithm would have been unlikely to be accepted as a proportionate means of achieving that aim. The consequential impact on students with disabilities, and indeed on all other students whose results were lower than their teachers' assessments, was stark. A-level results in the UK are a gateway to social mobility and wider opportunity in a stratified society. The 'downgrading' of results would have had disastrous, life-altering consequences for these students.

Algorithms and Discrimination

The use of algorithms in public decision-making raises urgent questions for anti-discrimination law and for equal justice more broadly. The A-levels case demonstrates that algorithms must not be uncritically accepted as the triumph of value-free 'science'. Algorithms can be, and often are, reflective of the biases of the programmers who develop them. This may result in a design that simply mirrors the dominant norms of society, such as assumptions about the learning capacity of non-disabled students. Algorithms are also far more difficult to scrutinise than the decision-making process of a human. Whilst they are efficient at making decisions, they cannot give considered reasons in individual cases unless they are designed for transparency and complemented by stringent human safeguards. Moreover, the algorithm itself may be the intellectual property of a private third party, keeping its workings or logic beyond the understanding of the Government department in question.

Algorithms are already a part of our daily lives, especially as consumers. The challenge remains to make them transparent and free from discriminatory bias.
