Robots don’t kill people, it’s the humans we should worry about

Hitting the target: new technology is shaping the nature of international intervention. Photograph courtesy of Royal United Services Institute

This year’s annual report of the UN Special Rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns, focuses on what it calls “lethal autonomous robotics and the protection of life”.

Submitted to the UN’s Human Rights Council on April 13 but only publicly released on Thursday, it has predictably given rise to headlines such as “Killer Robots pose threat to peace and should be banned, UN warned” in The Guardian and “Killer Robots should be banned before existence” in Asian News International.

In fact, the report calls for a moratorium on the development and use of “lethal autonomous robotics” (LARs) until the ethical, legal, and operational dimensions of their use can be properly assessed. While acknowledging the possibility that LARs could, under certain circumstances, comply with the requirements of international humanitarian law and international human rights law, it argues:

There is widespread concern that allowing LARs to kill people may denigrate the value of life itself.

As with any topic as emotive as this, there is ample potential for incensed debate, as the headlines referenced above indicate. Putting the hype to one side, how worried should we be about the potential development of LARs?

Machines have to be programmed to kill

First, it is important to note that the very term “lethal autonomous robotics” is problematic. There is no doubt that machines can deliver lethal force, but in this context autonomy is a relative term. People programme machines and choose how much autonomy to give them. Machines cannot be held accountable for their actions because they are not moral agents, but the people who design, programme and operate them are, and as such they should be held accountable.

The debate should therefore be about what kind of decision-making powers it is acceptable to write into the programmes of machines that can kill people and how those who design and deploy them can be held accountable for the machines they create. We need to know on what basis, and against what criteria, a machine is programmed to deliver lethal force, so that we can judge the culpability or otherwise of its designer and operator.

The international legal framework within which this accountability can be exercised is well established in the body of international humanitarian law and international human rights law, although Heyns and others have argued that it may need some updating to reflect advances in weapons technology – one of the main arguments for the proposed moratorium. If a machine is programmed or operated in a way that breaches the law, there is a clear mechanism for holding those responsible to account – in a domestic jurisdiction or in the International Criminal Court.

Thus the designers of all weapons systems are bound by law to design them in a way that best enables compliance with international law, as are those who use them. The level of sophistication of the machine involved does not change this. The same legal principles apply whether the weapon is a rifle, a cruise missile, or a system with a greater degree of autonomy, such as that on a main battle tank.

It must also be acknowledged that some weapons are deemed to be so indiscriminate in their effect that they are banned altogether. Examples of this include chemical, biological, radiological and nuclear weapons, as well as anti-personnel mines (as opposed to anti-tank mines, which are deemed to have a legitimate military application). Although these examples show that the nature of the technology is relevant to the legality of its use, this is far less of a factor than the intent of the human designer and operator.

Warfare as a tool of foreign policy

As the recent report published jointly by the Centre for International Intervention at the University of Surrey and The Royal United Services Institute argues, the main danger of an excessive focus on new weapons technology and its capabilities is that it obscures what should be the real debate. This should be about the acceptability of warfare as an instrument of foreign policy and the criteria against which the use of lethal force is justified.

Lethal weapon: US Air Force MQ-1L Predator UAV. USAF photo/Lt Col Leslie Pratt

As a US Department of Justice document leaked in February 2013 makes clear, because the US claims it is engaged in a “transnational global conflict” with the “illegal combatants” of al-Qaeda, it considers itself justified in carrying out pre-emptive strikes against targets suspected of posing an imminent terrorist threat to the US, even in areas such as Yemen and Somalia where it is not engaged in a recognised armed conflict.

The criteria for assessing individuals as terrorist targets (any adult male observed in physical proximity to a known terrorist), and the stretching of the definition of “imminence” to a point where it pretty well ceases to have any meaning, create a licence to kill that should concern us far more than the development of robotic weapons systems in themselves.

While the image of the “killer robot” may fill us with understandable dread, we have far more to fear from an unthinking use of hard power that may, in the long run, make us all a lot less safe than we are now.
