From Magna Carta to Machine Learning: AI Without Borders, Laws Within

Jun 18, 2025


About Nancy Siboe

Dr Nancy Siboe (PhD (Law), LLM (International Law), MBA (Strategic Management), LLB (Hons)) is a Programme Leader and Senior Lecturer in Law at the University of the West of England, Bristol, and a Fellow of the Higher Education Academy, UK. Her research interests include the use of artificial intelligence in legal processes, the rule of law, the use of force, and the protection of civilians during civil wars.

With the rapid rise of new technologies, particularly Artificial Intelligence (AI), a deep paradox has emerged: the complexity of the technology, the lack of precedent, and gaps in the laws that should preserve the rule of law leave citizens exposed to arbitrary decisions made with it.

The rule of law, as defined by A.V. Dicey, refers to the supremacy of law as opposed to the arbitrary exercise of power, the equality of all persons before the law, and the protection of individual human rights (cf. Lord Bingham, Joseph Raz and Brian Tamanaha). Legal instruments have long provided frameworks for protecting the rule of law: the 1215 Magna Carta introduced the right to a fair trial, protection from arbitrary detention, and the requirement that laws be applied equally; the writ of habeas corpus secured the presence in court of the defendant or criminal suspect; and the Constitution of the United States of America, in operation since 1789, marked a historical turning point in the development of the rule of law, seeking to create a strong central government whilst preserving individual autonomy. This later led to the American Bill of Rights and influenced the adoption of the 1789 French Declaration of the Rights of Man and of the Citizen. Globally, the Universal Declaration of Human Rights, adopted by the United Nations General Assembly in 1948, enshrines the rule of law in the sphere of international law. However, with the increasing use of AI in decision making, questions must be asked about how it affects concepts associated with the rule of law, such as transparency, fairness, and explainability, as this technology is no longer a figment of the imagination.

Whilst AI plays a key role in development, its operationalisation should be hinged within the constraints of the rule of law and the protection of human rights and values. Debates on AI regulation often overlook the fact that AI itself has regulatory effects on the legal system, which could endanger adherence to the rule of law in democratic states. The infrequency of discussion at the intersection of technology and the rule of law confirms that whilst AI is inherently global, the rule of law remains, for the most part, confined to national borders. There is therefore a need for international collaboration on legal and regulatory frameworks to improve the transparency and fairness of decisions made by AI. As of 2023, an AI Index analysis of the legislative records of 127 countries shows that the number of bills containing the term 'artificial intelligence' passed into law grew from just 1 in 2016 to 37 in 2022. An analysis of parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016.

Among the most influential and well-known recent global initiatives are the Council of Europe's Framework Convention on Artificial Intelligence, the first international legal instrument on AI focused on alignment with the rule of law, and UNESCO's Global Toolkit on AI and the Rule of Law. Regionally, the EU Artificial Intelligence Act 2024 takes a risk-based approach to AI in order to ensure transparency and accountability. In the US, a proposed 10-year moratorium on state-level AI regulation, introduced by the House Energy and Commerce Committee in May 2025, has met opposition from bipartisan groups and from states that have already regulated high-risk uses of the technology, who argue that the measure would pre-empt AI laws and regulations recently passed in dozens of states. The Data (Use and Access) Bill, at report stage in the United Kingdom's House of Commons, has raised concerns about the narrowed scope of Article 22 of the UK General Data Protection Regulation, which could allow more significant AI decisions to be made without meaningful human involvement.

By emphasising shared global responsibility, legal and regulatory frameworks can help mitigate the risks of automation bias and preserve the rule of law. AI must operate within a clear legal framework and remain subject to judicial oversight to ensure that the rule of law is upheld.

 
