Robotic Roguery: Analysing the Legality of Autonomous Weapons (Killer Robots) vis-à-vis Principles of International Humanitarian Law

by Amogh Pareek | Jan 21, 2020


About Amogh Pareek

Amogh Pareek is a third-year student at National Law University, Jodhpur, India, pursuing the B.B.A., LL.B. (Hons.) course.

Citations


Amogh Pareek, “Robotic Roguery: Analysing the Legality of Autonomous Weapons (Killer Robots) vis-à-vis Principles of International Humanitarian Law” (OxHRH Blog, 2020) <https://ohrh.law.ox.ac.uk/robotic-roguery:-analysing-the-legality-of-autonomous-weapons-(killer-robots)-vis-à-vis-principles-of-international-humanitarian-law> [Date of Access].

The outrage against autonomous weapon systems (“Killer Robots”) is at an all-time high. Recently, the UN Secretary-General called for an international ban on Killer Robots, describing their use as “morally despicable”. Mary Wareham of Human Rights Watch advocated a similar view, describing Killer Robots as “one of the most pressing threats to humanity”. Globally, over 22 nations, 116 AI and robotics companies, and over 3,000 robotics experts and scientists, including Elon Musk and Stephen Hawking, have urged the UN to ban Killer Robots. Amidst such uproar, the question of the legality of autonomous weapons assumes prime importance.

A Legal Perspective

The International Committee of the Red Cross (“ICRC”) defines autonomous weapon systems as “weapons that independently search for, identify and attack targets without human intervention.” Given their highly autonomous nature, they violate a host of international humanitarian law principles.

First, Killer Robots violate the Martens Clause, which provides that in cases not covered by treaties or customary international law (such as Killer Robots, for which there is no governing law or treaty), the principles of humanity and the dictates of public conscience apply. While the ‘principles of humanity’ refer to the principles of distinction and proportionality (discussed below), the ‘dictates of public conscience’ are determined by looking to public and expert opinion. As noted above, the use of Killer Robots is opposed not only by the general public, including 20 Nobel Peace Laureates, over 60 NGOs, and 160 religious leaders, but also by over 3,000 robotics experts and scientists – clearly establishing that the use of Killer Robots contravenes the Martens Clause.

Second, the use of Killer Robots violates the principle of distinction, which requires the ability to distinguish between combatants and non-combatants. The principle is enshrined in Articles 48, 51, 52, 53, 54 and 57 of Additional Protocol I to the Geneva Conventions, all of which reinforce that parties to a conflict must at all times distinguish between civilians and combatants, and direct attacks only against the latter. Operating without human judgment, Killer Robots cannot reliably draw this fine distinction between combatants and civilians. The problem is compounded because they are pre-programmed and human intervention in their decision-making is absent.

Third, the use of Killer Robots violates the principle of proportionality, which prohibits attacks against legitimate military targets where the expected collateral civilian harm would be excessive in relation to the anticipated military advantage. As mandated by Articles 51(5)(b) and 57(2) of Additional Protocol I, this requires a subjective, case-by-case assessment of the battlefield to ensure that harm to civilians is minimised. Killer Robots cannot make such an assessment: they lack the capability to weigh the many variable factors that must be considered before an attack – such as the number of civilians in the area and the effect an attack on the enemy target would have on them – making their use likely to result in a far greater human cost.

Fourth, the use of Killer Robots disregards the principle of military necessity, which permits only that degree of force required to achieve the legitimate purpose of the conflict, and prohibits destruction or injury unnecessary to that purpose. Killer Robots are incapable of making such judgments. For instance, they cannot determine whether an enemy they have shot has merely been knocked to the ground, is feigning injury, or is so severely wounded that he no longer poses a threat. Owing to this inability, a robot may unnecessarily shoot the person a second time, thereby disregarding the principle of necessity.

Conclusion

The combination of the factors outlined above makes it clear that the current state of autonomous weapon technology is plagued by a myriad of moral, ethical and legal issues. Moreover, the existing legal framework falls far short of providing a comprehensive regime governing the use of autonomous weapons. For these very reasons, until the gap between the state of autonomous weapon technology and the laws governing it is bridged, and a competent legal framework is devised, States and international organisations should resolve to place a pre-emptive ban on the production and use of Killer Robots.
