Playing with the Brain: The AI Act in the Age of Neurotechnology

by Nirmalya Chaudhuri | Oct 24, 2025


About Nirmalya Chaudhuri

Nirmalya Chaudhuri is a legal researcher based in India. He holds a BA LLB (Hons.) degree from the West Bengal National University of Juridical Sciences, India, and an LLM degree from the University of Cambridge. He may be contacted at nirmalyac08@gmail.com.

Article 5 of the European Union (EU) Artificial Intelligence (AI) Act came into force on 2 February 2025. Of considerable interest is Article 5(1)(a), which prohibits the deployment and use of AI systems that manipulate or alter a person’s behaviour beyond their conscious awareness, impairing their ability to make informed choices and creating a reasonable likelihood of significant harm. Recital 29 of the AI Act specifies that such techniques may include “machine-brain interfaces”, a set of technical advancements colloquially referred to as “neurotechnology”. In this post, it is argued that the complete prohibition on the use of AI systems falling within Article 5(1)(a) is the logical culmination of the unqualified protection given to “thought” against external interference in human rights jurisprudence.

What is notable is that these techniques have been subjected to a complete prohibition, rather than one contingent on the consent of the data subject. The only exception, under Recital 29, applies to medical techniques deployed with explicit consent, e.g. the use of Deep Brain Stimulation to treat neurological disorders. Such a blanket prohibition differs, for instance, from the framework for processing special categories of personal data under Article 9 of the General Data Protection Regulation (GDPR), which may still be processed with explicit consent. Assuming that the use of algorithmic systems envisaged under Article 5(1)(a) of the AI Act involves processing data on the cognitive and behavioural states of data subjects, the AI Act appears to carve out a special category of data, derived from a person’s internal decision-making processes, that is afforded even higher protection than that ordinarily available under Article 9 of the GDPR.

The reasons for providing greater protection to data relating to a person’s cognitive and affective states appear obvious: as the Information Commissioner’s Office (ICO) in the UK notes (p. 19), consent in the realm of “neurotechnology” may be an inadequate safeguard for ensuring control over one’s personal data. Ordinarily, by consenting to the processing of personal data, the data subject retains conscious control over which items of personal data are divulged. When neurotechnology itself manipulates and modifies the data subject’s cognitive and behavioural functioning, by directly modulating electrical activity or artificially stimulating certain areas of the brain, the “voluntary” and “informed” nature of consent becomes suspect. Further, reports have emerged of neurotechnological advancements that purport to convert brain scans into transcribed speech in real time. In the face of such technology, even if consent is initially provided, one has no control over what kinds of data may be unwittingly disclosed to the outside world, let alone over the inferences that may be drawn from them.

From a rights perspective, the prohibition in Article 5(1)(a) is itself an expression of the unqualified protection afforded to unexpressed “thought”, or the “forum internum”, in the European Convention on Human Rights (ECHR). Article 9(1) of the ECHR protects “freedom of thought, conscience and religion”. However, while Article 9(2) envisages limitations on the manifestation of one’s religion or beliefs, the provision is silent on any possible limitation of the freedom of “thought”. One plausible interpretation of this deliberate wording is that unexpressed “thought”, the forum internum, enjoys an inviolate and absolute status under the ECHR, free from any potential limitations or restrictions. This explains why, under Article 5(1)(a) of the AI Act, technological advancements that tinker with “thought” at its most basic level, namely the cognitive process of thinking, are not permitted even with consent, subject only to the medical use exception.

While technological advancements ought to be welcomed, it is imperative to ensure that the freedom to preserve one’s innermost thoughts and the autonomy to make choices are not compromised. Viewed from this angle, Article 5(1)(a) of the AI Act acts as a guardrail that safeguards the inviolability of “thought” from neurotechnological intervention.
