Machines that can target and kill people without human intervention or accountability pose a moral threat to the world.
The future of warfare may include many lethal autonomous weapons, but the world can't decide how, or if, to regulate them.
We need to ban lethal autonomous weapons, or "killer robots", as we have done with biological weapons, land mines and blinding lasers, and Australia should take a leading role in making that happen.
When it comes to weapons with artificial intelligence, there's an argument for keeping a human in control of at least some decisions.
2015 was a year in which we expanded our view of the universe, embraced new technologies and got a hint of the profound changes to come.
There is much debate about the ethics of artificial intelligence machines designed to kill. But who's responsible when a non-lethal AI system causes damage, harm or even death?
Some have argued we should not ban but embrace offensive autonomous weapons, or "killer robots". But the arguments against a ban are weak.
Why obsess about killer robots of the future, when all the parts are already here, and already in use?
The thousands of people who signed an open letter calling for a ban on autonomous killer weapons and robots are misguided. We already have such killing machines and we should embrace them.
We need to ban offensive autonomous weapons, or "killer robots", before a new arms race to produce them begins.