Killer Robots in the Cross-hairs of New NGO Campaign

by Conor Fortune | Apr 29, 2013



It has all the trappings of a sci-fi film.

A life-size, talking robot stands outside Britain’s Houses of Parliament, flanked by a ponytailed professor of robotics, a Nobel Peace Prize winner and human rights activists who are together calling for action to stop the killer robots – before it’s too late.

Only this gathering, which actually did take place in London this week, had a very serious message to share with the world.

The new Campaign to Stop Killer Robots, launched this week, brings together a diverse group of organisations from around 10 countries that share the common goal of pre-emptively prohibiting the development and use of fully autonomous weapons.

They hope to build on the strategies and the know-how of previous successful campaigns to ban landmines and cluster munitions – in which coalitions of members of the public (including victims of those indiscriminate weapons), civil society and like-minded governments came together to create new international treaties to ban the production, use, sale and transfer of a whole category of weapon. Although certain key military powers have yet to join those treaties, the stigma they created has been strong enough to raise standards and change behaviours. Use of landmines or cluster bombs is now met with widespread global outcry.

In countries around the world, research and development is already under way to build the weapons of tomorrow, which would take humans “out of the loop” on life-and-death decisions on the battlefield.

Some militaries already operate semi-autonomous weapons systems with humans still “in the loop” – think remotely controlled unmanned aerial vehicles, or drones, which can be weaponized – systems that often have a deadly impact on civilians.

The campaigners worry that taking human judgment – moral as well as military – completely out of the loop would significantly worsen the civilian risk posed by fully autonomous weapons.

Robots, they say, simply cannot be programmed to make the type of judgment call necessary to apply the principles of International Humanitarian Law – the laws of war. There is also the potential that fully autonomous weapons could be used in a variety of law enforcement situations outside of armed conflict, where they could pose a real human rights risk.

“Unless we draw a line in the sand now, we may end up sleepwalking into the acceptance of fully autonomous weapons,” says Thomas Nash, director of the British NGO Article 36.

The thought of an arms race to develop and deploy killer robots is “frightening”, says Steve Goose, director of the Arms Division at Human Rights Watch.

According to Goose, the goal of the new campaign is to create a new global standard to pre-emptively prohibit a “form of warfare that should never come into existence”. 

Jody Williams – awarded the Nobel Peace Prize in 1997 for her work on the International Campaign to Ban Landmines – agrees, and points out that public opinion is already pitched against the use of fully autonomous killer robots in warzones.

But she notes that there will be fierce opposition from weapons producers, who stand to gain the most from developing such weapons. 

Perhaps surprisingly, even many military officials have mixed feelings about the use of fully autonomous weapons in warfare, says University of Sheffield Professor of Robotics Dr. Noel Sharkey. 

Not only would they detract from the sense of virtue and valour cultivated by armed forces, but they also raise serious questions about accountability – for example, who in the chain of command should be held responsible when an autonomous weapon malfunctions?

“Delegating the decision to kill to robots is morally wrong. To not prevent them is morally outrageous,” Sharkey concludes.

This week’s hearings before the US Senate Judiciary Subcommittee on the Constitution, Civil Rights and Human Rights have reflected the nuances of this debate. The Subcommittee has received testimony that even current drone use – an issue separate from, but related to, the Campaign to Stop Killer Robots – affects public opinion, allied support and perceptions of morality, regardless of whether, as Retired Col. Martha McSally suggests, the lawyers are watching. The prospect of fully autonomous weapons threatens to reshape the IHL landscape and intensify the discourse on the complex ethics of robotics.

Conor Fortune is a London-based human rights activist and freelance journalist.
