The 2024 Paris Olympics: AI Mass Surveillance Under the Upcoming EU AI Act

by Esther Jaromi | Oct 12, 2023


About Esther Jaromi

Esther is a doctoral candidate at Queen Mary University London, where she also completed an LLM in Public International Law and now teaches internet regulation. Her thesis focuses on global social media regulation through international criminal law, aiming to harness technology for positive societal progress. Esther has worked with the European Union Delegation to the United Nations, the UNODC and the European Parliament. She also serves as an All Tech is Human Ambassador and is a UK UN Women Delegate.

Drafting legislation that spans 27 countries has always been a formidable challenge for the European Union’s legislators. The Artificial Intelligence Act [AI Act], proposed in 2021, has ushered in a new era of complexity, particularly in addressing AI-powered mass surveillance, which raises profound questions about privacy and civil liberties, principles deeply ingrained in the foundations of European Union law. The intricacies surrounding biometric mass surveillance therefore underscore the critical importance of formulating a balanced and nuanced approach in the EU’s AI Act.

In this context, the French government’s decision to employ experimental forms of AI-powered mass surveillance at the 2024 Olympic Games has already ignited a challenging and evolving legal discussion. The proposed expansion of surveillance powers and tools at the 2024 Paris Olympics is extensive and unprecedented, including large-scale, real-time camera systems supported by AI algorithms. The expansion seeks to monitor various aspects of public behaviour, such as detecting unattended luggage and tracking crowd movements, raising critical questions about how citizens’ activities will be surveilled.

The use of algorithms to identify suspicious behaviour adds another layer to the surveillance system. While proponents argue that it can enhance security, critics worry that algorithmic surveillance can lead to false positives, the amplification of potential biases, and privacy infringements without proper oversight and regulation.

After the French National Assembly approved AI video surveillance for security purposes at the Paris 2024 Olympics and Paralympics, France’s constitutional court backed the use of AI-powered surveillance cameras during the event. The French government has stated that these new surveillance powers are intended to be temporary and limited to the duration of the Olympics. However, the possibility that these powers will be extended or abused beyond the event has fuelled apprehension among civil rights advocates and citizens. Many worry that if such extensive and intrusive surveillance measures are approved for the Olympics, a dangerous precedent could be set for normalising mass surveillance in other contexts, eroding privacy and civil liberties.

The French court’s decision regarding the Paris Olympics coincides with ongoing deliberations on the AI Act, a ground-breaking piece of legislation set to shape global AI regulation through the so-called ‘Brussels Effect.’ The EU has now entered the ‘Trilogue’ stage, in which the AI Act’s final version is negotiated among the European Parliament, the European Commission, and the Council. Monitoring the Trilogue process is particularly interesting, as it may test the Parliament’s strong emphasis on protecting individuals by prohibiting biometric surveillance, emotion recognition, predictive policing, and social scoring in public spaces.

Using AI-powered mass surveillance during the Paris Summer Olympics could significantly challenge the proposed AI Act. While the AI Act is intended to set comprehensive regulations for AI technology across the EU, the specific case of the Paris Games raises several concerns:

  1. Privacy and Surveillance: The AI Act strongly emphasises protecting individual privacy and ensuring that AI technologies are used responsibly. The use of mass surveillance, especially if it involves facial recognition or other biometric data processing, may raise concerns about privacy infringements and surveillance overreach once the AI Act is adopted.
  2. Algorithmic Accountability: The AI Act also emphasises the need for transparency and accountability in AI systems. The use of AI algorithms in mass surveillance systems, as seen in the case of the Paris Olympics, could raise questions about the transparency and accountability of these algorithms, particularly if they are used to identify and monitor individuals in public spaces.
  3. International Implications: The Paris Olympics are an international event, and the use of AI-powered surveillance may involve data collection and monitoring of individuals from various countries. This raises questions about the extraterritorial impact of such surveillance and whether it aligns with international data protection and privacy standards.

The developments surrounding the AI Act and the Paris Summer Olympics are of paramount significance and warrant close attention from the global legal community. As the European Parliament, Commission, and Council negotiate the final version of the AI Act, and as the French government experiments with advanced surveillance technologies for the Olympics, we can anticipate monumental legal battles on the horizon.

Activists, politicians, and academics across the EU and the UK have already sounded alarms about Paris 2024. The outcome of the anticipated legal tension triggered by the Olympics will undoubtedly have a profound impact on the course of AI surveillance in the twenty-first century.
