The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure.
Killer robots don’t look like this, for now. (Denis Starostin/Shutterstock)
The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.
Diverging views on automated weapons systems could make it difficult for Australia and New Zealand to manage military ties at a delicate time in trans-Tasman relations.
Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.
The term ‘killer robot’ often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary-looking but no less lethal. (John F. Williams/U.S. Navy)
New Zealanders are worried about autonomous weapons. But military alliances with the US and Australia, and potential economic gains from local robotics research, mean NZ won’t yet take a tough stand.
Like atomic bombs and chemical and biological weapons, deadly drones that make their own decisions must be tightly controlled by an international treaty.
Fictional screen robots have long represented our fear of technology. A new animated family film combines this trepidation with many parents’ fear of losing offline connection with their kids.
Outsourcing use-of-force decisions to machines violates human dignity and is incompatible with international law.
German referee Felix Brych looks at a replay from the video assistant referee (VAR) during the UEFA Nations League semi-final between Portugal and Switzerland, June 2019. (EPA-EFE/Fernando Veludo)
Paul Salmon, University of the Sunshine Coast; Peter Hancock, University of Central Florida, and Tony Carden, University of the Sunshine Coast
We’re on the road to developing artificial intelligence systems that will be able to do tasks beyond those they were designed for. But will we be able to control them?
As AI is deployed in society, its impact can be positive or negative. The future is in our hands. (Shutterstock)
The Montréal Declaration calls for the responsible development of artificial intelligence. A world expert explains why scientists must choose how their expertise will benefit society.