Why Artificial Intelligence is Already a Human Rights Issue

Daniel Cullen - 31st January 2018
OxHRH | Media, Privacy and Communications
Image Credit: ITU Pictures via Flickr, used under a Creative Commons license available at https://creativecommons.org/licenses/by/2.0/

Last month, the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, released a statement following his official visit to the United States. Beyond issues of taxation, healthcare and housing, his report evaluated the impacts of new information technologies, including artificial intelligence (AI), on the poorest Americans. One example given was the growing use of privately developed automated risk assessment systems in pre-trial release and custody decisions, raising ‘serious concerns’ over due process. The report also cited the potential for automation to reduce demand for labour, exacerbating rates of extreme poverty.

The accelerating pace of progress in AI development (driven particularly by the subfield of machine learning) is currently generating a frenzied mix of anxiety and excitement. Public debates between figures such as Elon Musk and Mark Zuckerberg over the threats of ‘superintelligent’ forms of AI have received extensive coverage, while optimists have argued that AI might be directed towards solving pressing global challenges. But these narratives can easily distract from the fact that various AI-related technologies are already in widespread use. Some of these, as Professor Alston’s report highlights, can have distinct implications for human rights today.

Analysis of the intersections between human rights and AI-related technologies has been growing across a range of areas. Perhaps the most prominent have been predictions of significantly decreased employment in various sectors due to automation. The development of lethal autonomous weapons systems (LAWS) has prompted a backlash from campaigners seeking a pre-emptive ban on so-called ‘killer robots’. Researchers have also identified concerns over the privacy impacts of facial recognition software, the risks of discrimination through replication or exacerbation of bias in AI systems, and the effects of some ‘predictive policing’ methods.

The rights implications of AI technologies have recently begun to feature more directly at the UN Human Rights Council (HRC). During 2017, two formal reports submitted to the HRC discussed these issues. Report A/HRC/36/48 from the Independent Expert on the rights of older persons addressed the opportunities and challenges of robotics, artificial intelligence and automation in the care of older persons. Earlier in the year, report A/HRC/35/9 from the Office of the High Commissioner for Human Rights (OHCHR), on the topic of the gender digital divide, made reference to algorithmic discrimination and bias, and the potential for AI to drive improvements in women’s health.

The emerging relevance of AI issues can also be seen in the work of advocates at the HRC. At the 36th HRC session in September, a group of NGOs and states co-hosted a side event on the topic of ‘Artificial intelligence, justice and human rights’. In an OpenGlobalRights article at the conclusion of the session, Peter Splinter of the Geneva Centre for Security Policy called for HRC member and observer states to become more forward-looking on thematic issues, including sophisticated algorithmic systems and future forms of AI, so that the body can help shape regulation.

This reflects an increasing engagement with AI from human rights organisations more generally. Amnesty International, for example, this year announced a new initiative on AI within its technology and human rights programme. In the UK, an ongoing House of Lords committee inquiry into policy responses to AI has received submissions from a number of NGOs, including Privacy International, Liberty and Article 19. In its submission, the Human Rights and Big Data Project (HRBDT) at the University of Essex proposed that “a human rights-based approach should sit at the centre of the development and use of artificial intelligence.”

Some now see a rights-based approach as central to the development of ethical forms of AI. Speaking at the AI for Global Good summit in June, Salil Shetty of Amnesty International said: “We strongly believe that enshrining AI ethics in human rights is the best way to make AI a positive force in our collective future.” At present, however, where decisions over ethics are limited to tech-dominated environments, the range of possible human rights implications may not be fully considered. While some industry-based standards have been proposed, the HRBDT submission argues that these are generally not sufficiently comprehensive.

As more advanced AI systems continue to emerge, these will interact – positively and negatively – with an ever-wider range of human activities. As a result, human rights concerns relating to AI are likely to become more commonplace. In the short term, this will require further research into the complex impacts of these technologies. In the longer term, proposals for a human rights framework to support ‘ethical AI’ may become increasingly attractive. For this to be realised, stronger relationships would have to be formed between the fields of technology and human rights – which have often been disconnected.

Author profile

Daniel Cullen is currently studying for an LLM in Law at Birkbeck, University of London. He is a graduate of the School of Oriental and African Studies, University of London.

Citations

Daniel Cullen, “Why Artificial Intelligence is Already a Human Rights Issue” (OxHRH Blog, 31 January 2018), <http://ohrh.law.ox.ac.uk/why-artificial-intelligence-is-already-a-human-rights-issue> [date of access]
