Dutch Court Provides Valuable Precedent for Human Rights in the Digital Welfare State

by Divij Joshi | Mar 26, 2020

About Divij Joshi

Divij Joshi is a lawyer and an independent researcher focusing on the relationships between law, technology and political economy. He is presently working on public interest technology policy as a Mozilla Fellow, and also edits and contributes to the SpicyIP Blog.

Citations


Divij Joshi, ‘Dutch Court Provides Valuable Precedent for Human Rights in the Digital Welfare State’, (OxHRH Blog, March 2020), <https://ohrh.law.ox.ac.uk/dutch-court-provides-valuable-precedent-for-human-rights-in-the-digital-welfare-state>, [Date of access].

Even as technologies utilising ‘artificial intelligence’ and ‘big data’ proliferate in the public sector, their compatibility with human rights norms and constitutional protections remains in doubt. Investigations into the ‘digital welfare state’ reveal that the implementation of automated decision-making tools within essential government services is prejudicing the human rights of welfare dependents. In this context, a recent judgment of the Hague District Court, striking down the Dutch government’s use of an automated ‘risk scoring’ system, is an important precedent for evaluating and challenging these systems.

In this case, the Dutch government had implemented a risk-classification technology called System Risk Indicator (“SyRI”). The SyRI system allowed data from across government databases to be collated for the purpose of generating ‘risk reports’ – using personal data to classify individuals according to the risk they were deemed to pose of committing fraud. These reports could then be used in investigations of individuals suspected of benefits or tax fraud. A coalition of civil society groups and individuals challenged the Dutch Government in the Hague District Court, arguing that SyRI violated human rights protections as well as the General Data Protection Regulation (“GDPR”).

The Hague Court categorically struck down the SyRI system on the grounds that it violated the right to privacy enshrined in Article 8 of the European Convention on Human Rights (“ECHR”). While accepting that SyRI pursued a legitimate purpose and had a lawful basis, the court held that the use of data in the SyRI system did not strike a ‘fair balance’ between that purpose and the interference with the right to privacy of affected individuals. Specifically, the court noted that the implementation of SyRI did not provide adequate safeguards to protect personal data against abusive and arbitrary practices by the operators of the system.

The lack of transparency within the SyRI system was central to the court’s conclusion. The court noted that the SyRI system, including the legislation which enabled it, did not sufficiently describe the personal data which could be collected, the purposes for which such data could be used, or the manner in which it could be processed. This opacity meant that individuals had neither sufficient notice nor adequate information to verify, or defend themselves against, a prejudicial risk report generated by SyRI. Nor could the court evaluate the possibility of illegal systemic discrimination occurring within the system, as alleged by the petitioners. The court therefore concluded that the lack of transparency and verifiability meant that SyRI did not meet the threshold of necessity and proportionality required by Article 8 of the ECHR.

A report by the United Nations Special Rapporteur on Extreme Poverty documents the proliferation of technologies like SyRI, which mediate the citizen-state relationship through automated decision-making. The report reveals that, behind the glamour of technologies packaged as tools for improving ‘efficiency’ and eliminating human error, the digital welfare state is emerging as a new obstacle to addressing the systemic and structural causes of poverty. Rather than remedying structural failures, the implementation of these technologies risks undermining crucial protections for vulnerable populations by discouraging access to welfare and legitimising opaque and discriminatory government decisions.

In this context, the Hague District Court’s judgment should be considered a valuable precedent for centering human rights in the discourse on artificial intelligence and ‘data-driven’ technologies. The welfare state must be held to a higher standard of responsibility when implementing technologies that rely extensively upon the automated processing of personal data, and must ensure that the efficiencies that digital technologies allow are not used to undermine the privacy, security and dignity of its citizens.
