Who gets to fire the gun? Man or AI-powered machine? Flickr/Robot flingueur, CC BY-NC-ND

We need to keep humans in the loop when robots fight wars

Imagine a swarm of tens of millions of armed AI-piloted hexacopters, “killer robots” as some call them, sent to wipe out a particular group of people – say, all men of a certain age in a certain city.

Sounds like science fiction, but it was a scenario raised by Stuart Russell, a professor of artificial intelligence (AI), as part of a debate on robots in war at the World Economic Forum in Switzerland last week.

This swarm, he claimed, could be developed in about 18 to 24 months with Manhattan Project-style funding. One person could unleash a million weaponised AIs, and humans would have virtually no defence.

Sir Roger Carr, chairman of weapons manufacturer BAE Systems, tactfully described Russell’s vision as “extreme”.

But Sir Roger did come out strongly in favour of keeping humans in the loop in the design of autonomous weapons as a means of maintaining “meaningful human control”. An “umbilical cord” between a human and the machine was necessary, he said. Responsibility for the actions of the machine and compliance with the laws of war should be assigned to the human, not the machine.

Carr said the weapons business is more heavily regulated than any other industry. He stressed it was not his role to be an advocate for equipment. Rather, his role was to build equipment to government specifications and requirements.

Even so, he was emphatic that autonomous weapons would be “devoid of responsibility” and would have “no sense of emotion or mercy”. It would be a bad idea, he said, to build machines that decided “who to fight, how to fight and where to fight”.

Humans in, on and off the lethal loop

One of BAE’s research projects is a remotely piloted stealth fighter-bomber, Taranis. This could plausibly evolve into a “human off the loop” weapon – if the UK government specified that requirement.

Look! No pilot on board.

There is always the risk that under combat conditions the satellite link from the human to the machine could fail. The “umbilical cord” could snap. It is not clear how Taranis would behave in this circumstance.

Would it loiter and await re-establishment of its signal? Would it return to base? What would it do if attacked? Such details will need to be clarified sooner or later.
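
How a real aircraft would handle this is not public. As a purely illustrative sketch in Python, the question amounts to choosing a link-loss contingency behaviour; the behaviour names and the on_link_lost function below are invented for this article, not Taranis’s actual options.

```python
from enum import Enum, auto

class LinkLossBehaviour(Enum):
    """Hypothetical options for an aircraft whose control link has snapped."""
    LOITER_AND_WAIT = auto()   # hold position and try to re-establish the link
    RETURN_TO_BASE = auto()    # abandon the mission and fly home
    SELF_DEFEND = auto()       # engage an attacker without human input?

def on_link_lost(under_attack: bool) -> LinkLossBehaviour:
    # Illustrative default: prefer the least autonomous option.
    # Whether SELF_DEFEND should ever be selectable is precisely
    # the policy question left open here.
    if under_attack:
        return LinkLossBehaviour.SELF_DEFEND
    return LinkLossBehaviour.LOITER_AND_WAIT
```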

Angela Kane, a former UN High Representative for Disarmament Affairs, speaking in the debate, characterised progress in negotiations under the Convention on Certain Conventional Weapons (CCW) as “glacial”. Definitions remain elusive.

After UN Expert Meetings in 2014 and 2015, the meanings of “autonomous”, “fully autonomous” and “meaningful human control” remain disputed.

Policy loop and firing loop

There are two distinct areas in which one might want to assert “meaningful human control” of autonomous weapons:

  1. the definition of the policy rules that the autonomous weapon mechanically follows
  2. the execution of those rules when firing.

Current discussions focus on the latter – the execution of policy in the firing loop (select and engage). The widely accepted terms are “in the loop”, “on the loop” and “off the loop”. Let me explain how the three terms apply in practice.

Contemporary drones are remotely controlled. The robot does not decide to select or engage; a human telepilot does that. The Raytheon Patriot anti-missile system is a “human in the loop” system. Patriot can select a target (based on human-defined rules) but will not engage until a human presses a button to confirm.

Raytheon’s Phalanx, a defensive “close-in weapons system” (CIWS) designed to shoot down anti-ship missiles, can be an “on the loop” system. Once activated, it will select and engage targets. It will pop up an abort button for the human to hit, but will fire if the human does not override the robot’s decision.

Mines are an example of “off the loop” weapons. The human cannot abort and is not required to confirm a decision to detonate and kill.
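
To make the three terms concrete, here is a minimal Python sketch. The mode names come from the debate; the may_engage function and its flags are invented for illustration, and real systems would add timing windows, fail-safes and many more states.

```python
from enum import Enum

class FiringLoop(Enum):
    IN_THE_LOOP = "in"     # human must confirm before the weapon engages
    ON_THE_LOOP = "on"     # weapon engages unless the human aborts in time
    OFF_THE_LOOP = "off"   # no confirmation or abort is possible

def may_engage(mode: FiringLoop, human_confirmed: bool, human_aborted: bool) -> bool:
    """Toy decision logic for whether the weapon may fire in each mode."""
    if mode is FiringLoop.IN_THE_LOOP:
        return human_confirmed           # Patriot: wait for the button press
    if mode is FiringLoop.ON_THE_LOOP:
        return not human_aborted         # Phalanx: fire unless overridden
    return True                          # mines: no human input at all
```

On this toy logic, an “on the loop” system fires by default – may_engage(FiringLoop.ON_THE_LOOP, human_confirmed=False, human_aborted=False) returns True – whereas an “in the loop” Patriot would hold fire.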

If you take a standard robotics textbook definition of “autonomous” as referring to the ability of a system to function without an external human operator for a protracted period of time, then the oldest “autonomous” weapons are “off the loop”. For example, the Confederates used naval and land mines (known as “torpedoes” at that time) during the American Civil War (1861-65).

Policy autonomy and firing autonomy

Many people employ a more visionary notion of “autonomous”: the ability of a future AI to create or discover (i.e. initiate), via unsupervised machine learning and evolutionary game theory, the policy rules it will execute in its firing decisions.

We might think of this as the policy loop. It runs before the firing loop of select and engage. Who or what makes the targeting rules is a critical element of control, especially as robots, unlike humans, mechanically follow the rules in their programming.

Thus in addition to notions of remote control and humans being in, on and off the loop in firing, one might explore notions of human policy control and humans being in, on and off the loop of policy formation (i.e. initiating the rules that define who, where and how we fight).

Patriot has human policy control. Programmers key targeting rules into the system, and Patriot selects targets on the basis of those rules. Initiating the targeting rules is thus an element of control.
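
To make that concrete, here is a minimal Python sketch of a “human in the policy loop” arrangement. The Track type, the rule format and the select_targets function are invented for this illustration; they do not describe Patriot’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Track:
    kind: str        # e.g. "ballistic_missile", "aircraft"
    inbound: bool

# Policy loop: humans initiate (write and key in) the targeting rules.
HUMAN_DEFINED_RULES = [
    lambda t: t.kind == "ballistic_missile" and t.inbound,
]

# Firing loop: the machine mechanically applies those rules to select
# targets, then waits for a human "in the loop" to confirm engagement.
def select_targets(tracks: list[Track]) -> list[Track]:
    return [t for t in tracks if any(rule(t) for rule in HUMAN_DEFINED_RULES)]
```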

The Skynet of Hollywood’s Terminator fiction, by contrast, exemplifies a robot that has no humans in its policy or firing loops.

Some contemporary non-military policy-making is “human in the loop” in this sense: an AI computer model of the climate might make policy recommendations, but these can be reviewed and approved by humans.

What Carr described as objectionable was a machine that devised its own targeting rules (who, how and where to fight). A robot that follows targeting rules defined or approved by humans is clearly closer to “meaningful human control” than a robot that initiates rules not subject to human review.

Effective legal control

If some autonomous weapons are to be permitted, it is critical that effective legal control is built into them so that they cannot perpetrate genocide and war crimes. Developing a swarm of hexacopter “cranium bombers” to kill civilians, as in Russell’s scenario, is already a war crime, and that use is already banned.

Fielded autonomous weapons are already subject to Article 36 legal review (under Additional Protocol I to the Geneva Conventions) to ensure they can be operated in accordance with International Humanitarian Law.

There will be some exceptional cases where the human is in the policy loop but off the firing loop: anti-tank mines and naval mines, for example, are long-accepted weapons. There will also be cases where battlespace tempo (fast-moving enemy objects) requires humans to be on the firing loop rather than in it once the system is activated (e.g. Phalanx).

Ideally, where battlespace tempo permits, there should be humans in both policy and firing loops. Taking humans out of the policy loop should be comprehensively and pre-emptively banned.
