Machine Decision-making in the criminal justice system: The FATAL4JUSTICE? Project

by Karen Yeung and Milla Vidina | Apr 8, 2019


About Karen Yeung and Milla Vidina

Prof Karen Yeung is the University of Birmingham’s first Interdisciplinary Chair, taking up the post of Interdisciplinary Professorial Fellow in Law, Ethics and Informatics in the School of Law and the School of Computer Science in January 2018. She has been a Distinguished Visiting Fellow at Melbourne Law School since 2016. Milla Vidina is Senior Policy Officer at Equinet, the European Network of Equality Bodies.

Citations


Karen Yeung “Machine decision-making in the criminal justice system: The FATAL4JUSTICE? Project” (OxHRH Blog, 8 April 2019) <https://ohrh.law.ox.ac.uk/machine-decision-making-in-the-criminal-justice-system-the-fatal4justice-project> [Date of Access]

The capacity to collect, store and process digital data in real time on cloud servers, and to use vast data sets to train and feed machine learning algorithms, has enabled the development of machines capable of making decisions across an almost limitless array of applications. These ‘algorithmic decision-making’ (ADM) systems range from those that offer guidance on which consumer products to purchase, or that decide whether to grant (or refuse) an individual’s application for a loan, job or university place, through to those designed to direct the speed, direction and movement of autonomous vehicles. Although it is frequently claimed that machines are ‘better’ decision-makers than humans, owing to their capacity for consistency, speed and alleged objectivity, in recent years the use of ADM systems has come under sustained criticism and examination from academic researchers and investigative journalists.

The FATAL4JUSTICE? Project, funded by the Volkswagen Stiftung, seeks to deepen and enrich our understanding of the influence of ADM systems on future society by critically investigating ADM systems in the criminal justice system from multiple and intersecting disciplinary perspectives (including law, computer science, neuropsychology and political science). The criminal justice system is an ideal sphere of examination for several reasons.

Firstly, many different ADM systems are currently used within the criminal justice system (primarily in the US but increasingly in the UK), operating at various points within the legal process and making different kinds of decisions (including whether to prosecute, whether to grant bail, eligibility for parole, and the length and type of sentence).

Secondly, these systems are used to inform ‘high stakes’ decisions with profound consequences for the affected person, potentially depriving individuals of their fundamental rights (e.g. the right to due process, the presumption of innocence, and the right to liberty). Thirdly, the making of highly consequential decisions about the treatment of individuals has always been a critical function of the criminal justice system, giving rise to a rich and extensive body of experience and academic reflection, by reference to which the use of ADM systems can be assessed.

Finally, decision-making about individuals accused or suspected of criminal wrongdoing is notoriously difficult, not least because legal standards and their interpretation and application are imprecise and constantly evolving due to the dynamic social context of their operation. Accordingly, an examination of how ADM systems within the criminal justice system arrive at decisions about individuals may provide insight into the extent to which they may (or may not) fairly be regarded as ‘superior’ to human decision-making.

One of the primary aims of this project is to identify general situations in which the context of the decision is so highly relevant that a machine should not (given the current state of the technology) make automated decisions without active human oversight. We will therefore seek to develop general guidelines that can assist in determining the conditions under which humans should make decisions, when and how they could be supported by machines, and when machines alone are best suited to make decisions.  These guidelines will provide the foundation for identifying principles to inform how best to distribute decision-making authority between humans and machines more generally.

As a legal scholar concerned with understanding the larger social, legal, ethical and democratic implications of the ‘computational turn’, the primary focus of my research stream is to investigate whether the use of ADM systems in the UK and USA is consistent with the foundational principles of the criminal justice system. These principles, understood as grounded in the obligation of states to ensure respect for human rights (which also provides a structured framework for the resolution of conflicting rights and values) and the rule of law, will provide the normative framework against which the ADM systems currently in use in the criminal justice context will be evaluated. However, my research stream will also link directly with those led by my collaborators in computer science and political science, working together on a series of joint projects.

Taken together, all four project streams (including two projects led by legal scholars) will focus on analyzing how humans decide about the construction and implementation of ADM systems; how ADM systems make automated decisions, i.e. how humans decide by letting an ADM system decide; and how humans make decisions together with an ADM system. The project will run for four years from May 2019.

I am currently seeking to recruit a Research Associate for this project (closing date 10 April 2019); see here.

For more information about the project, see here.
