
Lethal autonomous robots: who’s really in control?

The state is still in control; it’s when drones and robots develop their own interests that Terminator becomes a true story. AAP / Alan Porritt

Anxiety about lethal autonomous robots has some substance. The state of play as currently constituted, however, already provides enough cause for concern. The Terminator scenario Monash associate professor Robert Sparrow evokes in his recent article - in which the machines decide humanity is no longer useful - is a long way from reality.

The Terminator scenario is not when robots use algorithms and rules of engagement: in that case human beings are still in charge. It is when robots are able to conceive an idea in their own interest. Sparrow says of drones:

…a human being has confronted the question of whether the goals the attack is intended to serve are worth killing them for.

Robots are no different, in essence, since a human has decided what pattern of behaviour should attract attack.

But states have always striven to kill systematically at a distance. Only the means are changing. The debate raises some interesting questions about the ultimate purposes of strategy and warfare. States, unlike robots, already have interests built in.

International theorist David Copp suggests something similar. He enters into the debate about whether states can have “normative autonomy”. This position runs against “agency individualism”, the idea that only individual human beings can have a moral sense, and plan and act on it. Copp’s work suggests states can also have a moral purpose independent of their individual members.

Whether steering a drone or writing algorithms for a robot, the person who decides to strike is not a single human being. If someone in Peoria decides a terrorist should die, nothing happens. If a state committee decides it, the person dies, even though some members of the committee may not agree with the decision.

Unmanned drones have no emotional purpose - only a task they’ve been programmed to fulfil. AAP/Northrop Grumman

The differences between drones and robots can be explained thus: a drone has no motive in either sense of the word. It requires a driver. Delivering a drone strike is no different from throwing a javelin.

But robots are different. They can be assigned a complex task and then provided with code, allowing them to detect certain patterns and react in certain ways when they do.

Let’s say you want to kill a cat. The cat is in Schrödinger’s house, but you’re not sure where. You could steer a drone by remote control, and watch through its camera as you thumbed the controller and manipulated it to find and kill the cat - assuming the drone had a lethal weapon attached.

Or you could send a robot, programmed with cat-recognition software, into the room. You would have to program the robot not only to recognise cats, but to kill them, and to avoid killing Schrödinger himself. This resembles the ‘signature strike’ scenario, currently in operation, where strikes are carried out against people who are following a signature pattern of behaviour.

The robot has no purpose, only a task. You have instructed it to find and kill the cat. It doesn’t know why. It goes about its business “without regard to the meaning content” of its assignment. You may be a felinophobe; there may be a plague of cats, or of fleas; ultimately, the robot doesn’t care. It recognises a signature, a pattern, and kills it.
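To make the distinction concrete, here is a minimal sketch of the kind of rule-based, pattern-then-action logic being described. It is illustrative only: the names, the “signature” threshold and the toy detection model are hypothetical stand-ins for far more complex recognition systems, not a description of any real targeting software.

```python
# Illustrative sketch of rule-based "signature" matching.
# All names, thresholds and features are hypothetical; the point is only
# that the machine checks a pattern and acts, "without regard to meaning".

from dataclasses import dataclass


@dataclass
class Observation:
    has_whiskers: bool
    has_tail: bool
    says_meow: bool
    wears_lab_coat: bool  # crude stand-in for "is Schrödinger, not the cat"


def matches_cat_signature(obs: Observation) -> bool:
    """Return True when the observed pattern matches the assigned signature."""
    if obs.wears_lab_coat:      # hard exclusion written in by the programmer
        return False
    score = sum([obs.has_whiskers, obs.has_tail, obs.says_meow])
    return score >= 2           # arbitrary threshold, also the programmer's choice


def act_on(obs: Observation) -> str:
    # The robot never asks why the pattern matters; it only applies the rule.
    return "engage" if matches_cat_signature(obs) else "ignore"


if __name__ == "__main__":
    print(act_on(Observation(True, True, True, False)))   # engage
    print(act_on(Observation(True, True, False, True)))   # ignore (Schrödinger)
```

Every judgment in that sketch, the threshold, the exclusions, the decision to act at all, was made in advance by a human being; the machine merely matches the pattern it was given.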

Neither drones nor robots have a moral sense of their own - they have no interests and no values. They do not desire righteousness, oil or global domination. They cannot imagine alternative futures. They are, in the jargon, affectless: without feelings. If they possessed any of these things, we would be sad when they died, or they would suffer psychological trauma like drone pilots. We would assign them rights. This would defeat their utility for war.

Realist theorists of international relations say self-defence has always been the highest value of states. However, in his book The Moral Purpose of the State, Australian constructivist Christian Reus-Smit argues that international systems are built around common ideas about “right living”.

Copp describes the process by which one state, Britain, whose decision-making apparatus was based in London, planned for and executed a campaign to retake the Falkland (Malvinas) Islands, and sank the Argentine cruiser Belgrano. Why? Surely no-one would be foolish enough to lead their country into a lavishly expensive war for the sake of a bunch of rocks in the Atlantic.

To say this, though, is to miss the role of emotion and moral purpose in policy-making. States are not made of people: they’re made by people, who then occupy the offices they’ve created. “Britain” was not defending its interests; it was defending its honour. In this case, exemplifying the “permanent conflicts between logical and emotional thinking”, emotion won the day. The state’s emotion, that is.

As other researchers have pointed out, “it is possible to conceive of agency beyond the human”. The crucial threshold for robots is when they write the code, but that presumes interests or values beyond self-defence. The machines are a long way yet from honour. At the moment states decide, but as Machiavelli pointed out in The Prince, states have their own sense of honour and their own moral code. And no-one is fully in control of them.
