“We had one goal: to develop legislation that would ensure that the AI ecosystem in Europe develops with a human-centered approach respecting fundamental rights and European values, building trust, creating awareness, how we can make the most of this AI revolution that is happening before our eyes,” said Thierry Breton, the European Commissioner for the Internal Market. On 8 December 2023, after intense negotiations between member states and the European Parliament, the EU reached a compromise on unprecedented legislation to regulate Artificial Intelligence (AI).
The compromise aims to establish ‘obligations for AI based on its potential risks and its level of impact.’ The provisional agreement seeks to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI systems by imposing obligations on state authorities and companies using AI, calibrated to each system’s potential risks and level of impact. The co-legislators agreed to prohibit certain AI applications and to implement a series of safeguards and exceptions.
The AI Act adopts a ‘risk-based’ approach, classifying AI systems according to their potential for harm. In line with the OECD’s legal instruments, it bans AI practices that pose significant risks to civil liberties, including certain predictive policing methods (data collection and data modelling, and area-based, event-based, and person-based policing). Rigorous rules are further imposed on high-risk AI applications, such as AI used in transport or in education and vocational training. Nevertheless, it permits exceptions for law enforcement under strict conditions, maintaining a balance between public safety and personal freedoms.
The AI Act mirrors the GDPR in its approach and, like the GDPR, aims to set a global standard in AI regulation, potentially reshaping innovation and compliance in AI and Machine Learning across Europe and beyond. A governance framework is established, featuring an AI Office within the European Commission to ensure uniform enforcement across EU member states. Violations of the Act’s rules carry substantial fines, calculated as a percentage of the offending company’s global annual turnover.
For companies in the fields of cybersecurity, information governance and eDiscovery, the AI Act aligns AI systems with strict privacy and data protection standards. In eDiscovery, for example, the Act requires that AI tools used for legal investigations comply with transparency and ethical standards.
The co-legislators agreed to “prohibit biometric categorisation systems that use sensitive characteristics (such as political, religious, philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; social scoring based on social behaviour or personal characteristics; AI systems that manipulate human behaviour to circumvent their free will; AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)”.
With the adoption of this legislation, the EU has become the first jurisdiction in the world to implement comprehensive AI regulation; but does the Act really recognize the potential threat that certain applications of AI pose to citizens’ rights?
Much of the data that exists is generated by people, who therefore have rights regarding its protection and access to it. Where data processing is permitted under Article 10(5) of the AI Act, Article 9 of the GDPR nonetheless requires the data subject to give ‘explicit consent to the processing of those personal data for one or more specified purposes’. The EU AI Act still fails to elaborate on the conditions for collecting personal data from publicly available sources for the mandatory training, validation, and testing of high-risk AI systems.
Governments succeeded in negotiating a few exemptions so that law enforcement can still use biometric identification systems to tackle serious crimes, such as terrorism threats or child sexual exploitation. However, there is still no full ban on the public use of face scanning and other ‘remote biometric identification’ systems. Not ensuring a full ban on facial recognition is a missed opportunity to prevent damage to human rights. Any compromise must address concerns about the use of this technology, such as the biometric identification of citizens. AI is already being used for border surveillance and in predictive analytics systems that forecast migration trends, so there is a risk that forecasts about migration patterns will be used to facilitate pushbacks and pullbacks. While the Act seems to protect the rights of citizens, it therefore does not extend to the rights of all people, such as migrants.