
Lethal autonomous robots must be stopped in their tracks

The topic of killer robots was drawn back into the public sphere last week by a top UN human rights expert’s widely publicised call for a moratorium on the development and use of “lethal autonomous robotics”. Inevitably, this conjured up some familiar concerns.

The opening scenes of James Cameron’s 1984 film The Terminator portray people running for cover beneath ruined buildings while hunter-killer robots circle menacingly overhead. Of course, such images must already have a certain contemporary resonance in Pakistan and Afghanistan, where people live in fear of being killed by a Hellfire missile fired by a Predator or Reaper drone, controlled by operators in the United States.

Yet if people are dying in drone strikes today, at least a human being has confronted the question of whether the goals the attack is intended to serve are worth killing them for.

Now that military scientists around the world are developing autonomous weapons intended to be capable of identifying and attacking targets without direct human oversight – referred to interchangeably as lethal autonomous robots and killer robots – the scenario Cameron portrays in the first few minutes of his film is perhaps closer than we think.

It’s important to stress that no such weapons are currently deployed, although various technologies of this sort are in development. And, while not “autonomous”, the sophistication of certain robotic systems being trialled for the battlefield, as discussed already on The Conversation, gives some insight into where things may be going.

Last week’s discussion on the ethics of lethal autonomous robots at the UN Human Rights Council followed in the footsteps of a November 2012 Human Rights Watch report, Losing Humanity: the Case Against Killer Robots.

But the military logic driving the rapidly expanding use of drones and the development of autonomous weapons has been obvious for some time. It was because we viewed this prospect with alarm that colleagues and I founded the International Committee for Robot Arms Control at a meeting in the UK in September 2009.

Risks and rewards

The development of autonomous weapons would undermine international peace and security by lowering the domestic political costs of going to war and by greatly increasing the risk of conflicts being triggered by accident.

The fear of the public seeing their sons and daughters return in body bags is the main thing that currently deters governments from going to war. If governments think they can impose their will on affairs in foreign lands using autonomous weapons, there will be little to stop them bombing and assassinating those they perceive as their enemies even more often than they already do.

The UN’s Christof Heyns has called for a global pause in the development and deployment of “killer robots”.

Of course, as the invasions of Iraq and Afghanistan demonstrate all too well, wars are easier to start than to finish. Similarly, despite the enthusiasm of the West for fighting wars entirely in other people’s countries, the violence of these conflicts has ways of finding its way home.

The stabbing of a British soldier in Woolwich by two men identifying as Muslims has been widely described as an act of terrorism. Yet as Glenn Greenwald has argued, given that it was an attack on a member of the British armed services in the context of the UK’s involvement in the war in Afghanistan, one wonders whether it might not equally well be thought of as a poor man’s drone strike. Misplaced faith in the possibility of risk-free warfare may end up putting more lives at risk.

When autonomous submarines are circling each other in the Pacific 24 hours a day and autonomous planes are poised to strike strategic targets should some particular set of conditions on a checklist maintained by a computer be met, the risk of accidental war will be all too real.

Stop Killer Robots

The philosophical tradition of just war theory, institutionalised in the law of armed conflict, is one of the key frameworks that currently limit the scope and destructiveness of war.

This tradition places severe restrictions on the conduct of war, including regarding who is and is not a legitimate target of attack. Civilians are not legitimate targets, nor are soldiers who have indicated a desire to surrender or who are wounded such that they pose no military threat.

Despite the rapid progress of computer science, I am extremely sceptical that machines will be able to make the complex contextual judgements needed to reliably meet the requirements of just war theory at any point in the foreseeable future.

There is also a peculiar horror associated with the idea of people being killed by robots, which I have been working to elucidate in my research. Even though they are willing to kill each other, enemies at war are in a moral relationship.

At a bare minimum, they must acknowledge their enemy as their enemy and be willing to take responsibility for the decision to kill them. Robots are unable to offer this recognition themselves and arguably obscure the moral relationship between combatants to such an extent as to call into question the ethics of their use as weapons.

For all these reasons, I applaud the recent launch of the Campaign to Stop Killer Robots announced by a coalition of NGOs in London in April this year and support its goal of a global ban on the development and deployment of lethal autonomous weapons.

Further reading:
Robots don’t kill people, it’s the humans we should worry about
Predators or Plowshares? Arms Control of Robotic Weapons
