There are growing concerns about the impact of technology on mental autonomy: our ability to make up our own minds free of undue influence, to keep our thoughts private and not to be penalised for our views. These concerns are wide-ranging and relate to the impact of emotion recognition technology on privacy of thought, the impact of artificial intelligence on decision-making, the impact of disinformation and other social media manipulation on voting decisions, and the impact of neuroscience on the privacy and autonomy of our minds.
But conceptualising “mental autonomy” is challenging. What does it mean to be autonomous when we are all products of our upbringing, education and society? Is autonomy a question of an individual’s state of mind, or is it a matter of the actions they choose to take or communications they choose to have? How can the impact of technology on mental autonomy be determined? As regards law and public policy, if legislators wished to protect mental autonomy from potential impacts of technology, what would be the most pragmatic way of doing so?
In this context, human rights lawyers have begun to explore the meaning and implications of freedom of thought and freedom of opinion, rights which had been largely overlooked until recently. They are exploring what limits these rights impose on technological techniques and tools, and how the law ought to take account of those rights in regulating technology. In this vein, the Bonavero Institute of Human Rights at the University of Oxford recently hosted an interdisciplinary group of researchers, each working on aspects of mental autonomy. The discussion focused on the meaning and significance of mental autonomy across disciplines, on how legal regulation might be informed by interdisciplinary insights, and on how other disciplines (such as ethics) can go further than the law in outlining protective frameworks.
Among the participants at the Bonavero roundtable, psychologists are recording individuals' brain activity while they make decisions, and using brain-stimulation tools to explore how changes to the decision-making regions of the brain affect the decision-making process. Philosophers are exploring the ethics of influencing individuals' minds and behaviour, and scoping whether, and within what limits, there may exist a right of mental integrity, akin to the established right of bodily integrity that requires consent before interference. They are researching whether and to what extent there is a link between autonomy, transparency and resistibility in the context of digital 'nudges'. They are also investigating the risks that artificial intelligence systems may pose to human autonomy, and how those risks should be guarded against. Across all of these disciplines there is a shared perception that mental autonomy is threatened by technology; but there is not yet a common lexicon for measuring or identifying that threat, nor is there enough evidence to determine its extent and whether it is sufficiently grave to merit regulation. All participants agreed that there is a need for more empirical research, and for linking empirical findings to regulatory proposals and frameworks.
The meeting noted that there are currently some legal protections for mental autonomy, but these do not necessarily focus on mental state. For example, the EU Consumer Rights Directive fixes a ‘cooling off’ period during which consumers may cancel certain contracts, such as remote purchases. Consumers also have rights in the face of contractual misrepresentation. Subliminal advertising is not permitted in some countries. Duress, a state of impaired mental autonomy, is a defence to certain criminal offences, while victims have a right to compensation for mental harm in some circumstances, for example due to stalking. By analogy with these examples, one open question is whether regulation to restrict technological interference with autonomy should focus directly on its impact on mental states, or on other features of the technology. For example, it is possible that many of the risks might be mitigated by the setting of appropriate limits on algorithmic processes. Another question is whether different degrees of interference may be permissible in different environments: for example, should there be different limits on ‘nudge’ technology when used for political persuasion or life decisions, compared to commercial advertising?
Overall, this topic raises some of the most significant and pressing issues of our time: how to take the best from technology without unwittingly sacrificing our fundamental freedoms of thought and opinion. It is a hugely fertile area for future research, and the authors look forward to further interdisciplinary collaboration among a broader group.