Why Artificial Intelligence is Already a Human Rights Issue

by Daniel Cullen | Jan 31, 2018


About Daniel Cullen

Daniel Cullen is a Research Officer in the Centre for Criminology at the University of Oxford. He holds an LLM in Law from Birkbeck, University of London, and a BA in History and Economics from SOAS, University of London.

Citations


Daniel Cullen, “Why Artificial Intelligence is Already a Human Rights Issue” (OxHRH Blog, 31 January 2018), <https://ohrh.law.ox.ac.uk/why-artificial-intelligence-is-already-a-human-rights-issue> [date of access]

Last month, UN Special Rapporteur on extreme poverty and human rights Professor Philip Alston released a statement following his official visit to the United States. Beyond many issues around taxation, healthcare and housing, his report evaluated the impacts of new information technologies, including artificial intelligence (AI), on the poorest Americans. One example given was the growing use of privately-developed automated risk assessment systems in pre-trial release and custody decisions, raising ‘serious concerns’ over due process. The report also cited the potential for automation to reduce demand for labour, exacerbating rates of extreme poverty.

The accelerating pace of progress in AI development (driven particularly by the subfield of machine learning) is currently generating a frenzied mix of anxiety and excitement. Public debates between figures such as Elon Musk and Mark Zuckerberg over the threats of ‘superintelligent’ forms of AI have received extensive coverage, while optimists have argued that AI might be directed towards solving pressing global challenges. But these narratives can easily distract from the fact that various AI-related technologies are already in widespread use. Some of these, as Professor Alston’s report highlights, can have distinct implications for human rights today.

Analysis of the intersections between human rights and AI-related technologies has been growing across a range of areas. Perhaps the most prominent have been predictions of significantly decreased employment in various sectors due to automation. The development of lethal autonomous weapons systems (LAWS) has prompted a backlash from campaigners seeking a pre-emptive ban on so-called ‘killer robots’. Researchers have also identified concerns over the privacy impacts of facial recognition software, the risks of discrimination through replication or exacerbation of bias in AI systems, and the effects of some ‘predictive policing’ methods.

The rights implications of AI technologies have recently begun to feature more directly at the UN Human Rights Council (HRC). During 2017, two formal reports submitted to the HRC discussed these issues. Report A/HRC/36/48 from the Independent Expert on the rights of older persons addressed the opportunities and challenges of robotics, artificial intelligence and automation in the care of older persons. Earlier in the year, report A/HRC/35/9 from the Office of the High Commissioner for Human Rights (OHCHR), on the topic of the gender digital divide, made reference to algorithmic discrimination and bias, and the potential for AI to drive improvements in women’s health.

The emerging relevance of AI issues can also be seen in the work of advocates at the HRC. At the 36th HRC session in September, a group of NGOs and states co-hosted a side event on the topic of ‘Artificial intelligence, justice and human rights’. In an OpenGlobalRights article at the conclusion of the session, Peter Splinter of the Geneva Centre for Security Policy called for HRC member and observer states to become more forward-looking on thematic issues, including sophisticated algorithmic systems and future forms of AI, so that the body can help shape regulation.

This reflects an increasing engagement with AI from human rights organisations more generally. Amnesty International, for example, this year announced a new initiative on AI within its technology and human rights programme. In the UK, an ongoing House of Lords committee inquiry into policy responses to AI has received submissions from a number of NGOs, including Privacy International, Liberty and Article 19. In its submission, the Human Rights, Big Data and Technology Project (HRBDT) at the University of Essex proposed that “a human rights-based approach should sit at the centre of the development and use of artificial intelligence.”

Some now see a rights-based approach as central to the development of ethical forms of AI. Speaking at the AI for Global Good summit in June, Salil Shetty of Amnesty International said: “We strongly believe that enshrining AI ethics in human rights is the best way to make AI a positive force in our collective future.” At present, however, where decisions over ethics are limited to tech-dominated environments, the range of possible human rights implications may not be fully considered. While some industry-based standards have been proposed, the HRBDT submission argues that these are generally not sufficiently comprehensive.

As more advanced AI systems continue to emerge, these will interact – positively and negatively – with an ever-wider range of human activities. As a result, human rights concerns relating to AI are likely to become more commonplace. In the short term, this will require greater research into the complex impacts of these technologies. In the longer term, proposals for a human rights framework to support ‘ethical AI’ may become increasingly attractive. For this to be realised, stronger relationships would have to be formed between the fields of technology and human rights – which have often been disconnected.
