Robots are currently used by police for bomb disposal. Future versions will be much more sophisticated. Nigel Roddis/AAP

Three ways robots can save lives in war

Military robots are not all bad.

Sure, there are risks and downsides to weaponised artificial intelligence (AI), but there are upsides too. Robots offer greater precision in attacks, reduced risk of collateral damage to civilians, and reduced risk of “friendly fire”.

AI weapons are not being developed as weapons of mass destruction. They are being developed as weapons of precise destruction. In the right hands, military AI facilitates ever greater precision and ever greater compliance with international humanitarian law.


Read more: World split on how to regulate 'killer robots'


There are at least three ways that robots can be useful in war zones.

1. Bomb disposal

Bomb disposal robots reduce risk to humans. Mostly remotely operated, they have little autonomy and are used to investigate and defuse or detonate improvised explosive devices.

The Dragon Runner is a radio-controlled robot used by bomb disposal teams.

As robots become more dexterous and agile, there will come a time when there is no need for a human to be next to a bomb to defuse it.

The robot in The Hurt Locker – a movie based around a bomb disposal unit in Baghdad – was portrayed as pretty useless. But future robots will be able to do everything the humans do in that film, better and quicker.

No one objects to robot bomb disposal.

2. Room-by-room clearing

Room-by-room clearing is one of the riskier infantry tasks.

In World War II, booby traps were sometimes triggered by pressure sensors under whisky bottles and packets of cigarettes. Human troops entering houses often succumbed to the allure of smokes and booze and were killed as a result.

Today ISIS fighters disguise booby traps as bricks and stones. Such disguised traps are specifically prohibited by international humanitarian law.

In theory, robots fitted with smaller versions of the sensors used to inspect luggage at airports could detect the wiring and pressure sensors associated with such booby traps.

Pointman Tactical Robot.

Robots like the Pointman Tactical Robot and the iRobot Negotiator are already capable of entering buildings, climbing stairs and negotiating obstacles to search them. Future versions are likely to be armed, carry more advanced sensors and operate with greater autonomy, and may well be classified.


Read more: How to make robots that we can trust


More agile humanoid (or animal-like) versions of these robots could be used to clear buildings of booby traps and enemy fighters seeking to ambush troops.

3. Maintaining safety zones

It’s plausible that robots could contribute to maintaining perimeter security in the near future.

Military robot technology could be used to enforce safe havens that protect unarmed civilian refugees from genocides like those in Srebrenica and Rwanda, and from unlawful bombing of the kind ongoing in Syria.

Peacekeeping military robots could stop war criminals killing innocent civilians at little or no risk to supervising human peacekeepers.

Much of the required technology is already available “off the shelf” from equipment vendors. Air defence systems such as Raytheon’s Patriot and Phalanx, which can target missiles, aircraft and artillery shells, have been in production for decades.

Sentry robots such as the Hanwha Techwin (formerly Samsung Techwin) SGR-A1 and the DoDAAM Super aEgis II are also commercially available and have already been fielded.

Should we delegate lethal decisions to machines?

At some point, on some missions, the question of whether to delegate decisions to kill the enemy to autonomous machines will have to be faced.

This is actually not a new “Moral Rubicon”. The Confederates crossed this line in the American Civil War. When General Sherman’s men stormed Fort McAllister in 1864, several were killed by “torpedoes” – the name given at the time to anti-personnel landmines.


Read more: Artificial intelligence researchers must learn ethics


Anti-personnel landmines were not banned until 1997. Back in 1864, General Sherman did not treat Major Anderson, the Confederate commander, as a war criminal. In a gesture of gallantry to the defeated foe, considered customary in his day, he entertained him at dinner.

Adding more variables and using computing machinery instead of physical machinery does not change the fundamental moral choice to delegate a targeting decision to a machine.

The justification is simple. If robots can perform a lawful military function more precisely and with less risk of error than humans, then it is arguably right to let them do it.

This is a big technical “if”, of course. It requires robots with AI knowledge representations of what is legal and moral, an active research area known as machine ethics.
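
To make the idea of a machine-readable representation of legal constraints slightly more concrete, here is a deliberately simplified sketch in Python. Every name, field and threshold in it is invented for illustration; real machine-ethics research grapples with context, uncertainty and contested legal interpretation far beyond anything shown here.

```python
# A toy sketch of encoding rules of international humanitarian law (IHL)
# as machine-checkable constraints. Purely illustrative, not a real system.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool           # distinction: is this a lawful military target?
    is_surrendering: bool        # those hors de combat are protected
    expected_civilian_harm: int  # estimated collateral damage (arbitrary units)
    military_advantage: int      # anticipated military value (arbitrary units)

def engagement_permitted(t: Target) -> bool:
    """Return True only if every encoded rule is satisfied."""
    # Rule of distinction: never target non-combatants.
    if not t.is_combatant:
        return False
    # Protection of those hors de combat: never target the surrendering.
    if t.is_surrendering:
        return False
    # Rule of proportionality: civilian harm must not be excessive
    # relative to the anticipated military advantage.
    if t.expected_civilian_harm > t.military_advantage:
        return False
    return True

# Example: a surrendering combatant is never a lawful target.
print(engagement_permitted(Target(True, True, 0, 10)))  # False
```

Even this toy example shows why the “if” is big: each boolean and number above hides a judgment (who counts as a combatant? how is harm estimated?) that humans themselves often get wrong.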

Machines have achieved superhuman performance in chess, Jeopardy! and Go. With sufficient research, superhuman performance in ethics may become possible too.

Robots are a double-edged sword. Used badly, they can perpetrate genocide and war crimes. Used well, they can prevent them.
