AI is going to fundamentally transform how nations wage war. By failing to address it, the defence review leaves Australia unprepared for the future of war.
The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure.
The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.
As tensions between the US and Russia escalate, both sides are developing technological capabilities, including artificial intelligence, that could be used in conflict.
The ethics and psychology of trust suggest ways we might learn to trust self-driving cars, but also show why doing so might be more challenging than we expect.
Rebel fighters in the latest Star Wars movie are helped by a droid that was captured from the enemy and reprogrammed. Could that happen in real life with today’s autonomous weapons?
We need to ban lethal autonomous weapons, or “killer robots”, as we have done with biological weapons, land mines and blinding lasers, and Australia should take a leading role in making that happen.
The thousands of people who signed an open letter calling for a ban on autonomous killer weapons and robots are misguided. We already have such killing machines and we should embrace them.
Lecturer on Law and Associate Director of Armed Conflict and Civilian Protection, International Human Rights Clinic, Harvard Law School, Harvard University