The debate on autonomous weapons isn't paying enough attention to the technology already in use.
A standard element of international humanitarian law since 1899 should guide countries as they consider banning lethal autonomous weapons systems.
As tensions between the US and Russia escalate, both sides are developing technological capabilities, including artificial intelligence that could be used in conflict.
The unexpected behaviour of even simple bots is only going to get more dramatic as AI scales up.
Treaties banning biological and chemical weapons are in place, and the path is clear to remove nuclear weapons too. Lethal autonomous weapons (killer robots) should be next.
The ethics and psychology of trust suggest ways we might learn to understand self-driving cars, but also show why doing so might be more challenging than we expect.
Rebel fighters in the latest Star Wars movie are helped by a droid that was captured from the enemy and reprogrammed. Could that happen in real life with today's autonomous weapons?
Machines that can target and kill people without human intervention or accountability pose a moral threat to the world.
The future of warfare may include many lethal autonomous weapons, but the world can't decide how, or whether, to regulate them.
We need to ban lethal autonomous weapons, or "killer robots", as we have done with biological weapons, land mines and blinding lasers, and Australia should take a leading role in making that happen.
Autonomous submarines might do for naval warfare what drones are doing for air warfare. So should Australia consider autonomous subs as a replacement for the Collins class?
The moral and ethical dilemmas of future warfare are depicted in this tight British thriller. But what will happen when humans become more removed from the weapons of war?
When it comes to weapons with artificial intelligence, there's an argument for keeping a human in charge of some of the action.
Science fiction has long warned of technology taking over the world. We're increasingly connected to a digital world that's growing larger and more automated. So what if it starts to evolve?
Is genuine artificial consciousness possible? Should we protect jobs from automation? Your questions on AI and robots answered here.
Arming police drones could lead to less human error and fewer deaths, but it opens up other possibilities that need careful attention.
Some have argued we should not ban but embrace offensive autonomous weapons, or "killer robots". But the arguments against a ban are weak.
Why obsess about killer robots of the future, when all the parts are already here, and already in use?
If military robots are inevitable, then AI and robotics researchers should work to make them ethical, not retreat by calling for an ineffectual ban.
The thousands of people who signed an open letter calling for a ban on autonomous killer weapons and robots are misguided. We already have such killing machines and we should embrace them.