Discrimination in the Age of Artificial Intelligence

by Arindrajit Basu | Oct 23, 2018


About Arindrajit Basu

Arindrajit Basu holds an LLM (Public International Law) from the University of Cambridge and works as a Policy Officer at The Centre for Internet & Society, India.

Citations


Arindrajit Basu, “Discrimination in the age of Artificial Intelligence” (OxHRH Blog, 23 October 2018), <http://discrimination-in-the-age-of-artificial-intelligence> [date of access].

The dawn of Artificial Intelligence (AI) has been celebrated by both government and industry across the globe. AI offers the potential to augment many existing bureaucratic processes and improve human capacity, if implemented in accordance with principles of the rule of law and international human rights norms. Unfortunately, AI-powered solutions have often been implemented in ways that have resulted in the automation, rather than the mitigation, of existing societal inequalities.

In the international human rights law context, AI solutions pose a threat to norms which prohibit discrimination. International human rights law recognizes that discrimination may take place in two ways: directly or indirectly. Direct discrimination occurs when an individual is treated less favourably than someone else who is similarly situated, on one of the grounds prohibited in international law, which, as per the Human Rights Committee, include race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Indirect discrimination occurs when a policy, rule or requirement is ‘outwardly neutral’ but has a disproportionate impact on groups protected by one of the prohibited grounds of discrimination. A clear example of indirect discrimination recognized by the European Court of Human Rights arose in DH & Ors v Czech Republic. The ECtHR held that an apparently neutral set of statutory rules violated the prohibition of discrimination: the rules implemented tests designed to evaluate the intellectual capability of children, but an excessively high proportion of minority Roma children scored poorly and were consequently sent to special schools, possibly because the tests were blind to cultural and linguistic differences. This case acts as a useful analogy for the potential disparate impacts of AI and should serve as a useful precedent for future litigation against AI-driven solutions.

Indirect discrimination by AI may occur at two stages. The first is the use of incomplete or inaccurate training data, which results in the algorithm processing data that does not accurately reflect reality. Cathy O’Neil explains this using a simple example. There are two types of crimes: those that are ‘reported’, and those that are only ‘found’ if a police officer happens to be patrolling the area. The first category includes serious crimes such as murder or rape, while the second includes petty crimes such as vandalism or possession of small quantities of illicit drugs. Increased police surveillance in areas of US cities where Black or Hispanic people reside leads to more crimes being ‘found’ there. The data is therefore likely to suggest that these communities commit a higher proportion of crimes than they actually do, a form of indirect discrimination that has been empirically demonstrated by research published by ProPublica.
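
To make the sampling problem concrete, here is a minimal, purely illustrative sketch in Python. All of the numbers (the true offence rate, the patrol intensities, the neighbourhood labels ‘A’ and ‘B’) are hypothetical assumptions rather than figures from O’Neil’s work or the ProPublica study; the point is only to show how unequal observation rates distort recorded data.

```python
import random

random.seed(0)

# Hypothetical illustration: two neighbourhoods with the SAME true rate of
# petty offences, but different levels of patrolling. Only offences that a
# patrolling officer actually observes ('found' crimes) enter the dataset.
TRUE_OFFENCE_RATE = 0.05                    # identical in both neighbourhoods
PATROL_INTENSITY = {"A": 0.9, "B": 0.2}     # chance an offence is 'found'

def simulate_recorded_crime(neighbourhood, population=10_000):
    """Count the offences that end up recorded for one neighbourhood."""
    recorded = 0
    for _ in range(population):
        offence_occurs = random.random() < TRUE_OFFENCE_RATE
        offence_found = random.random() < PATROL_INTENSITY[neighbourhood]
        if offence_occurs and offence_found:
            recorded += 1
    return recorded

for name in ("A", "B"):
    print(name, simulate_recorded_crime(name))
# Approximate output: A records ~450 offences, B records ~100, even though
# the true offence rate is identical in both neighbourhoods.
```

Any model trained on these recorded counts inherits the distortion: neighbourhood A appears far riskier even though the underlying behaviour is identical.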

Discrimination may also occur at the stage of data processing, which takes place inside a metaphorical ‘black box’ that accepts inputs and generates outputs without revealing to the human developer how the data was processed. This conundrum is compounded by the fact that algorithms are often used to solve amorphous problems, attempting to reduce a complex question to a simple answer. An example is the development of ‘risk profiles’ of individuals for the determination of insurance premiums. Data might show that an accident is more likely to take place in inner cities because populations there are more densely packed. Racial and ethnic minorities are more likely to reside in these areas, which means that an algorithm could learn that minorities are more likely to get into accidents, thereby generating an outcome (a ‘risk profile’) that indirectly discriminates on grounds of race or ethnicity.
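
The same dynamic can be shown with another small, hypothetical Python sketch. The postcode labels, accident rates and correlation strengths below are invented for illustration, and the ‘model’ is deliberately simple: the average recorded accident rate per postcode, used as a risk score. Ethnicity is never supplied as an input, yet the resulting scores differ systematically between groups because postcode acts as a proxy.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical illustration of proxy discrimination: ethnicity is never a
# model input, but postcode is, and postcode correlates with both ethnicity
# and (via population density) accident frequency. All rates are invented.
def make_record():
    inner_city = random.random() < 0.5
    minority = random.random() < (0.6 if inner_city else 0.1)    # correlated with postcode
    accident = random.random() < (0.20 if inner_city else 0.08)  # density effect on accidents
    return {"postcode": "inner" if inner_city else "suburb",
            "minority": minority,
            "accident": accident}

records = [make_record() for _ in range(50_000)]

# The 'model': average recorded accident rate per postcode, used as a risk score.
totals, accidents = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["postcode"]] += 1
    accidents[r["postcode"]] += r["accident"]
risk_score = {p: accidents[p] / totals[p] for p in totals}

# Average risk score by the protected attribute the model never saw.
for group, label in ((True, "minority"), (False, "majority")):
    scores = [risk_score[r["postcode"]] for r in records if r["minority"] == group]
    print(label, round(sum(scores) / len(scores), 3))
# Minority policyholders receive systematically higher risk scores even
# though ethnicity never entered the model: postcode acts as a proxy.
```

An audit that only checks whether protected attributes appear among the model’s inputs would miss this; the disparity only becomes visible when outcomes are compared across groups.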

It would be wrong to ignore discrimination, both direct and indirect, that occurs as a result of human prejudice. The key difference between such discrimination and discrimination by AI lies in the ability of other individuals to compel a human decision-maker to explain the factors that led to the outcome in question and to test its validity against principles of human rights. The increasing discretion and, consequently, power being delegated to autonomous systems mean that principles of accountability, which audit and check indirect discrimination, need to be built into the design of these systems. In the absence of these principles, we risk surrendering core tenets of human rights law to the whims of an algorithmically crafted reality.
