<h1>The defence review fails to address the third revolution in warfare: artificial intelligence</h1><figure><img src="https://images.theconversation.com/files/523372/original/file-20230428-15-gy0qd6.jpeg?ixlib=rb-1.1.0&rect=102%2C56%2C7498%2C4102&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Throughout history, war has been irrevocably changed by the advent of new technologies. Historians of war have identified several technological revolutions.</p>
<p>The first was the <a href="https://www.brown.edu/Departments/Joukowsky_Institute/courses/13things/7687.html">invention of gunpowder</a> by people in ancient China. It gave us muskets, rifles, machine guns and, eventually, all manner of explosive ordnance. It’s uncontroversial to claim gunpowder completely transformed how we fought war. </p>
<p>Then came the invention of the nuclear bomb, raising the stakes higher than ever. Wars could be ended with just a single weapon, and life as we know it could be ended by a single nuclear stockpile.</p>
<p>And now, war has – like so many other aspects of life – entered the age of automation. AI will cut through the “fog of war”, transforming where and how we fight. Small, cheap and increasingly capable uncrewed systems will replace large, expensive, crewed weapon platforms.</p>
<p>We’ve seen the beginnings of this in Ukraine, where sophisticated armed home-made drones <a href="https://www.bbc.com/news/technology-65389215">are being developed</a>, where Russia is <a href="https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines">using AI “smart” mines</a> that explode when they detect footsteps nearby, and where Ukraine successfully used autonomous “drone” boats in a major attack on the <a href="https://theconversation.com/ukraine-how-uncrewed-boats-are-changing-the-way-wars-are-fought-at-sea-201606">Russian navy at Sevastopol</a>.</p>
<p>We also see this revolution occurring in our own forces in Australia. And all of this raises the question: why has the government’s recent defence strategic review failed to seriously consider the implications of AI-enabled warfare?</p>
<h2>AI has crept into Australia’s military</h2>
<p>Australia already has a range of autonomous weapons and vessels that can be deployed in conflict. </p>
<p>Our air force expects to acquire a number of 12-metre-long uncrewed <a href="https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat">Ghost Bat</a> aircraft to ensure our very expensive F-35 fighter jets aren’t made sitting ducks by advancing technologies. </p>
<p>On the sea, the defence force has been testing a new type of uncrewed surveillance vessel called <a href="https://www.minister.defence.gov.au/media-releases/2023-03-06/first-ocius-bluebottle-uncrewed-surface-vessels-adf">the Bluebottle</a>, developed by local company Ocius. And under the sea, Australia is building a prototype six-metre-long Ghost Shark <a href="https://www.defence.gov.au/news-events/news/2022-12-14/ghost-shark-stealthy-game-changer">uncrewed submarine</a>. </p>
<p>Australia also looks set to develop many more technologies like this. The government’s <a href="https://www.theaustralian.com.au/nation/defence/3bn-accelerator-puts-war-hitech-on-fast-track/news-story/4b4cabf8e40b37ef687d30ce3ea121d0">just-announced A$3.4 billion defence innovation “accelerator”</a> aims to get cutting-edge military technologies, including hypersonic missiles, directed energy weapons and autonomous vehicles, into service sooner.</p>
<p>How then do AI and autonomy fit into our larger strategic picture?</p>
<p>The recent defence strategic review is the latest analysis of whether Australia has the necessary defence capability, posture and preparedness to defend its interests through the next decade and beyond. You’d expect AI and autonomy to be a significant concern – especially since the review recommends <a href="https://www.afr.com/politics/federal/defence-rejig-costs-budget-19b-and-rising-20230424-p5d2qw">spending a not insignificant A$19 billion</a> over the next four years. </p>
<p>Yet the review mentions autonomy only twice (both times in the context of existing weapons systems) and AI once (as one of the four pillars of the AUKUS submarine program). </p>
<h2>Countries are preparing for the third revolution</h2>
<p>Around the world, major powers have made it clear they consider AI a central component of the planet’s military future. </p>
<p>The House of Lords in the United Kingdom is holding a <a href="https://committees.parliament.uk/committee/646/ai-in-weapon-systems-committee/">public inquiry</a> into the use of AI in weapons systems. In Luxembourg, the government just hosted an <a href="https://www.laws-conference.lu/">important conference</a> on autonomous weapons. And China has announced its intention to become the world leader in AI by 2030. Its New Generation AI Development Plan <a href="https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/">proclaims</a> “AI is a strategic technology that will lead the future”, both in a military and economic sense.</p>
<p>Similarly, Russian President Vladimir Putin has <a href="https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html">declared that</a> “whoever becomes the leader in this sphere will become ruler of the world” – while the United States has <a href="https://usacac.army.mil/sites/default/files/publications/17855.pdf">adopted a</a> “third offset strategy” that will invest heavily in AI, autonomy and robotics. </p>
<p>Unless we give more focus to AI in our military strategy, we risk being left fighting wars with outdated technologies. Russia saw the painful consequences of this last year, when its missile cruiser Moskva, the flagship of the Black Sea fleet, <a href="https://www.bbc.com/news/world-europe-61103927">was sunk</a> after being distracted by a drone. </p>
<h2>Future regulation</h2>
<p>Many people (including myself) hope autonomous weapons will soon be regulated. I was invited as an expert witness to an intergovernmental <a href="https://www.amnesty.org/en/latest/news/2023/02/more-than-30-countries-call-for-international-legal-controls-on-killer-robots/">meeting in Costa Rica</a> earlier this year, where more than 30 Latin American and Caribbean nations called for regulation – many for the first time. </p>
<p>Regulation will hopefully ensure meaningful human control is maintained over autonomous weapon systems (although we’re yet to agree on what “meaningful control” will look like).</p>
<p>But regulation won’t make AI go away. We can still expect to see AI, and some levels of autonomy, as vital components in our defence in the near future.</p>
<p>There are instances, such as in minefield clearing, where autonomy is highly desirable. Indeed, AI will be very useful in managing the information space and in military logistics (where its use won’t be subject to the ethical challenges posed in other settings, such as when using lethal autonomous weapons).</p>
<p>At the same time, autonomy will create strategic challenges. By lowering costs and making it easier to scale up forces, it will shift the geopolitical order. Turkey, for example, is becoming a <a href="https://www.aspistrategist.org.au/has-turkey-become-an-armed-drone-superpower/">major drone superpower</a>. </p>
<h2>We need to prepare</h2>
<p>Australia needs to consider how it might defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us – and where it might be impossible to determine the attacker. A review that ignores all of this leaves us woefully unprepared for the future. </p>
<p>We also need to engage more constructively in ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defence is to be found in the political arena, and not the military one.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bet-youre-on-the-list-how-criticising-smart-weapons-got-me-banned-from-russia-185399">'Bet you're on the list': how criticising 'smart weapons' got me banned from Russia</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/204619/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council as an ARC Laureate Fellow. He has been banned indefinitely from Russia for his outspoken criticism of Russia's use of AI weapons in Ukraine. </span></em></p>AI is going to fundamentally transform how nations wage far. By failing to address it, the defence review leaves Australia unprepared for the future of war.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1987252023-02-21T13:24:17Z2023-02-21T13:24:17ZWar in Ukraine accelerates global drive toward killer robots<figure><img src="https://images.theconversation.com/files/510915/original/file-20230217-593-z3je8t.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4021%2C2924&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It wouldn't take much to turn this remotely operated mobile machine gun into an autonomous killer robot.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span></figcaption></figure><p>The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a <a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">Department of Defense directive</a>. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence autonomous weapons. It follows a related <a href="https://www.nato.int/cps/en/natohq/official_texts_208376.htm">implementation plan</a> released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.” </p>
<p>Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in <a href="https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines#:%7E:text=Utah%2Dbased%20Fortem%20Technologies%20has,them%20%E2%80%94%20all%20without%20human%20assistance.">Ukraine</a> and <a href="https://foreignpolicy.com/2021/03/30/army-pentagon-nagorno-karabakh-drones/">Nagorno-Karabakh</a>: Weaponized artificial intelligence is the future of warfare.</p>
<p>“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of <a href="https://article36.org/">Article 36</a>, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said. </p>
<h2>Pressure of war</h2>
<p>But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.</p>
<p>This month, a key Russian manufacturer <a href="https://www.defenseone.com/technology/2023/01/russian-robot-maker-working-bot-target-abrams-leopard-tanks/382288/">announced plans</a> to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to <a href="https://www.forbes.com/sites/katyasoldak/2023/01/27/friday-january-27-russias-war-on-ukraine-daily-news-and-information-from-ukraine/">defend Ukrainian energy facilities</a> from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous <a href="https://www.avinc.com/tms/switchblade">Switchblade drone</a>, said the technology is <a href="https://apnews.com/article/russia-ukraine-war-drone-advances-6591dc69a4bf2081dcdd265e1c986203">already within reach</a> to convert these weapons to become fully autonomous. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1446461845070549008"}"></div></p>
<p>Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “<a href="https://abcnews.go.com/Technology/wireStory/drone-advances-ukraine-bring-dawn-killer-robots-96112651">logical and inevitable next step</a>” and recently said that soldiers might see them on the battlefield in the next six months. </p>
<p>Proponents of fully autonomous weapons systems <a href="https://news.northeastern.edu/2019/11/15/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem/#_ga=2.7414138.976428111.1676666580-169995920.1676666580">argue that the technology will keep soldiers out of harm’s way</a> by keeping them off the battlefield. Such systems would also allow military decisions to be made at superhuman speed, radically improving defensive capabilities. </p>
<p>Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them. </p>
<p>By contrast, fully autonomous drones, like the so-called “<a href="https://fortemtech.com/products/dronehunter-f700/">drone hunters</a>” now <a href="https://u24.gov.ua/news/shahed_hunters_defenders">deployed in Ukraine</a>, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems. </p>
<h2>Calling for a timeout</h2>
<p>Critics like <a href="https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/">The Campaign to Stop Killer Robots</a> have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of <a href="https://www.stopkillerrobots.org/stop-killer-robots/digital-dehumanisation/">digital dehumanization</a>.</p>
<p>Together with <a href="https://www.hrw.org/topic/arms/killer-robots">Human Rights Watch</a>, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a soldier crouches on the ground peering into a black box as to small projectiles with wings are launched from tubes on either side of him" src="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=662&fit=crop&dpr=1 600w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=662&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=662&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=831&fit=crop&dpr=1 754w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=831&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=831&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings.</span>
<span class="attribution"><a class="source" href="https://madsciblog.tradoc.army.mil/wp-content/uploads/2021/06/Switchblade.jpg">U.S. Army AMRDEC Public Affairs</a></span>
</figcaption>
</figure>
<p>The organizations argue that the powers <a href="https://research.northeastern.edu/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem-2/#:%7E:text=They%20found%20that%20there%20are,dollars%20into%20this%20arms%20race.">investing most heavily</a> in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the <a href="https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf">hands of terrorists and others outside of government control</a>.</p>
<p>The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “<a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">appropriate levels of human judgment over the use of force</a>.” Human Rights Watch <a href="https://www.hrw.org/news/2023/02/14/review-2023-us-policy-autonomy-weapons-systems">issued a statement</a> saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.</p>
<p>But as Gregory Allen, an expert from the national defense and international relations think tank <a href="https://www.csis.org/">Center for Strategic and International Studies</a>, argues, this language <a href="https://www.forbes.com/sites/davidhambling/2023/01/31/what-is-the-pentagons-updated-policy-on-killer-robots/">establishes a lower threshold</a> than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.” </p>
<p>The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy. </p>
<p>The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.</p>
<h2>Impossible balance?</h2>
<p>The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen. </p>
<p>The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “<a href="https://www.icrc.org/en/document/reflections-70-years-geneva-conventions-and-challenges-ahead">cannot be transferred to a machine, algorithm or weapon system</a>.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.</p>
<p>If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.</p>
<p class="fine-print"><em><span>I am not connected to Article 36 in any capacity, nor have I received any funding from them. I did write a short opinion/policy piece on AWS that was posted on their website.</span></em></p>
<p class="fine-print"><em>The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure. – James Dawes, Professor of English, Macalester College</em></p>
<h1>‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie</h1><figure><img src="https://images.theconversation.com/files/489521/original/file-20221013-12-lm966h.jpg?ixlib=rb-1.1.0&rect=143%2C201%2C1386%2C862&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ghost Robotics Vision 60 Q-UGV.</span> <span class="attribution"><a class="source" href="https://www.dvidshub.net/image/7351259/ghost-robotics-vision-60-q-ugv-demo">US Space Force photo by Senior Airman Samuel Becker</a></span></figcaption></figure><p>You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies <a href="https://www.popularmechanics.com/military/a12043/4267549/">would watch the latest Bond movie</a> to see what technologies might be coming their way.</p>
<p>Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming <a href="https://www.thewrap.com/florence-pugh-dolly-movie-murderous-sex-robot-apple-tv-plus/">sex robot courtroom drama Dolly</a>.</p>
<p>I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a <a href="https://apex-magazine.com/short-fiction/dolly/">2011 short story</a> by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.</p>
<h2>The real killer robots</h2>
<p>Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic <a href="https://www.britannica.com/topic/Metropolis-film-1927">Metropolis</a>.</p>
<p>But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.</p>
<p>Indeed, contrary to recent fears, robots may never be sentient.</p>
<p>It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and <a href="https://www.militarystrategymagazine.com/article/drones-in-the-nagorno-karabakh-war-analyzing-the-data/">Nagorno-Karabakh</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/drones-over-ukraine-fears-of-russian-killer-robots-have-failed-to-materialise-180244">Drones over Ukraine: fears of Russian 'killer robots' have failed to materialise</a>
</strong>
</em>
</p>
<hr>
<h2>A war transformed</h2>
<p>Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of <a href="https://theconversation.com/eye-in-the-sky-movie-gives-a-real-insight-into-the-future-of-warfare-56684">the real future of killer robots</a>. </p>
<p>On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store. </p>
<p>And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms. </p>
<p>This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see <a href="https://www.forbes.com/sites/amirhusain/2022/06/30/turkey-builds-a-hyperwar-capable-military/?sh=1500c4b855e1">Turkey emerging as a major drone power</a>.</p>
<p>And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies. </p>
<p>Robot manufacturers are, however, starting to push back against this future.</p>
<h2>A pledge not to weaponise</h2>
<p>Last week, six leading robotics companies pledged they would <a href="https://www.theguardian.com/technology/2022/oct/07/killer-robots-companies-pledge-no-weapons">never weaponise their robot platforms</a>. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can <a href="https://youtu.be/knoOXBLFQ-s">perform an impressive backflip</a>, and the Spot robot dog, which looks like it’s <a href="https://youtu.be/wlkCQXHEgjA">straight out of the Black Mirror TV series</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1578400002056953858"}"></div></p>
<p>This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised <a href="https://newsroom.unsw.edu.au/news/science-tech/world%E2%80%99s-tech-leaders-urge-un-ban-killer-robots">an open letter</a> signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a <a href="https://newsroom.unsw.edu.au/news/science-tech/unsws-toby-walsh-voted-runner-global-award">global disarmament award</a>.</p>
<p>However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.</p>
<p>We have, for example, already seen <a href="https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back">third parties mount guns</a> on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was <a href="https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html">assassinated by Israeli agents</a> using a robot machine gun in 2020.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<h2>Collective action to safeguard our future</h2>
<p>The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.</p>
<p>Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation. </p>
<p>It is therefore even more significant than any company pledge that the UN Human Rights Council <a href="https://www.ohchr.org/en/news/2022/10/human-rights-council-adopts-six-resolutions-appoints-special-rapporteur-situation">recently decided unanimously</a> to explore the human rights implications of new and emerging technologies like autonomous weapons. </p>
<p>Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation. </p>
<p>Australia has not, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/192170/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1027362018-09-06T13:19:17Z2018-09-06T13:19:17ZAI has already been weaponised – and it shows why we should ban ‘killer robots’<figure><img src="https://images.theconversation.com/files/235215/original/file-20180906-190636-aogrro.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/unmanned-air-uav-spy-above-enemy-26952160?src=-ZOKXFCzFXCQZUjYk5R16g-1-16">Oleg Yarko/Shutterstock</a></span></figcaption></figure><p>A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.</p>
<p>I witnessed this growing gulf at a recent UN meeting of more than 70 countries <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">in Geneva</a>, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf">the US claimed</a> that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.</p>
<p>Yet it’s highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/autonomous-weapons-systems-and-changing-norms-in-international-relations/8E8CC29419AF2EF403EA02ACACFCF223">setting undesirable standards</a> for its role in the use of force.</p>
<p>A series of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">open letters</a> by prominent researchers speaking out against weaponising artificial intelligence have helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.</p>
<p>Most air defence systems <a href="https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf">already have</a> significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Humans still press the trigger, but for how long?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541?src=eQqZybPxaHhkvow-YSqfIA-1-1">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries’ militaries to drop bombs on targets. But we know from incidents <a href="https://www.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/The%20Civilian%20Impact%20of%20Drones.pdf">in Afghanistan and elsewhere</a> that drone images aren’t enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision making, often with <a href="http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/">harmful effects</a>. </p>
<p>As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices laid out by drones. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as “semi-autonomous” or so-called “legacy systems”. Again, this makes the issue of “killer robots” seem more futuristic than it really is. This also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.</p>
<p>Several key principles of international humanitarian law require deliberate human judgements that machines <a href="https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/">are incapable of</a>. For example, the legal definition of who is a civilian and who is a combatant isn’t written in a way that could be programmed into AI, and <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903">machines lack</a> the situational awareness and ability to infer things necessary to make this decision.</p>
<h2>Invisible decision making</h2>
<p>More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. To choose their proposed targets, drones <a href="https://www.theguardian.com/science/the-lay-scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan">already rely heavily</a> on intelligence data processed by “black box” algorithms that are very difficult to understand. This <a href="http://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">makes it harder</a> for the human operators who actually press the trigger to question target proposals.</p>
<p>As the UN continues to debate this issue, it’s worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically <a href="http://www.article36.org/wp-content/uploads/2016/04/A36-Disarm-Dev-Marginalisation.pdf">less likely</a> to attend international disarmament talks. So their willingness to speak out strongly against autonomous weapons is all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of those in favour of autonomous weapons) also reminds us that they are most at risk from this technology.</p>
<p>Given what we know about existing autonomous systems, we should be very concerned that “killer robots” will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.</p>
<p class="fine-print"><em><span>Ingvild Bode receives funding from the Joseph Rowntree Charitable Trust.</span></em></p>
<p class="fine-print"><em>The debate on autonomous weapons isn’t paying enough attention to the technology already in use. – Ingvild Bode, Senior Lecturer in International Relations, University of Kent</em></p>
<h1>Ban ‘killer robots’ to protect fundamental moral and legal principles</h1><figure><img src="https://images.theconversation.com/files/232107/original/file-20180815-2909-5xtnkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The U.S. military is already testing a Modular Advanced Armed Robotic System.</span> <span class="attribution"><a class="source" href="https://www.marforpac.marines.mil/Exercises/RIMPAC/RIMPAC-Photos/igphoto/2001572635/">Lance Cpl. Julien Rodarte, U.S. Marine Corps</a></span></figcaption></figure><p>When drafting a <a href="https://www.britannica.com/event/Hague-Conventions">treaty on the laws of war</a> at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language. </p>
<p>This standard, known as the <a href="https://www.icrc.org/eng/resources/documents/article/other/57jnhy.htm">Martens Clause</a>, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”</p>
<p>I was the lead author of a <a href="https://www.hrw.org/node/321376">new report</a> by <a href="https://www.hrw.org/">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a> that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">weapons</a>.</p>
<p>Representatives of more than 70 nations will gather from August 27 to 31 at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.</p>
<h2>Making rules for the unknowable</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=712&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=712&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=712&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=895&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=895&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=895&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Russian diplomat Fyodor Fyodorovich Martens, for whom the Martens Clause is named.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Friedrich_Fromhold_Martens_(1845-1909).jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.</p>
<p>Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/03/KRC_Briefing_CCWApr2018.pdf">all working to develop</a> them. They argue that the technology would process information faster and keep soldiers off the battlefield.</p>
<p>The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.</p>
<h2>Principles of humanity</h2>
<p>The history of the Martens Clause shows that it is a fundamental principle of international humanitarian law. Originating in the <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=9FE084CDAC63D10FC12563CD00515C4D">1899 Hague Convention</a>, versions of it appear in all four <a href="https://www.icrc.org/eng/assets/files/publications/icrc-002-0173.pdf#page=83">Geneva Conventions</a> and <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6C86520D7EFAD527C12563CD0051D63C">Additional Protocol I</a>. It is cited in <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=056FD614A7D05D90C12563CD0051EC75">numerous</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=CB3CAB98FF67D28EC12574C60038D63C">disarmament</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6D8BF0E4ABD74D62C125825D004955B1">treaties</a>. In 1995, concerns under the Martens Clause motivated countries to adopt a <a href="https://ihl-databases.icrc.org/ihl/INTRO/570">preemptive ban on blinding lasers</a>. </p>
<p>The principles of humanity require humane treatment of others and respect for human life and dignity. Fully autonomous weapons could not meet these requirements because they would be unable to feel compassion, an emotion that inspires people to minimize suffering and death. The weapons would also lack the legal and ethical judgment necessary to ensure that they protect civilians in complex and unpredictable conflict situations.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Under human supervision – for now.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span>
</figcaption>
</figure>
<p>In addition, as inanimate machines, these weapons could not truly understand the value of an individual life or the significance of its loss. Their algorithms would translate human lives into numerical values. By making lethal decisions based on such algorithms, they would reduce their human targets – whether civilians or soldiers – to objects, undermining their human dignity.</p>
<h2>Dictates of public conscience</h2>
<p>The growing opposition to fully autonomous weapons shows that they also conflict with the dictates of public conscience. Governments, experts and the general public have all objected, often on moral grounds, to the possibility of losing human control over the use of force.</p>
<p>To date, <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">26 countries</a> have expressly supported a ban, including China. <a href="https://www.theguardian.com/commentisfree/2018/apr/11/killer-robot-weapons-autonomous-ai-warfare-un">Most countries</a> that have spoken at the U.N. meetings on conventional weapons have called for maintaining some form of meaningful human control over the use of force. Requiring such control is effectively the same as banning weapons that operate without a person who decides when to kill.</p>
<p>Thousands of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">scientists and artificial intelligence experts</a> have endorsed a prohibition and demanded action from the United Nations. In July 2018, they issued a <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">pledge not to assist</a> with the development or use of fully autonomous weapons. <a href="https://www.clearpathrobotics.com/2014/08/clearpath-takes-stance-against-killer-robots/">Major corporations</a> have also called for the prohibition.</p>
<p>More than 160 <a href="https://www.paxforpeace.nl/stay-informed/news/religious-leaders-call-for-a-ban-on-killer-robots">faith leaders</a> and more than 20 <a href="https://nobelwomensinitiative.org/nobel-peace-laureates-call-for-preemptive-ban-on-killer-robots/?ref=204">Nobel Peace Prize laureates</a> have similarly condemned the technology and backed a ban. Several <a href="http://www.openroboethics.org/wp-content/uploads/2015/11/ORi_LAWS2015.pdf">international</a> and <a href="http://duckofminerva.dreamhosters.com/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons.pdf">national</a> public opinion polls have found that a majority of people who responded opposed developing and using fully autonomous weapons.</p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, a coalition of 75 nongovernmental organizations from 42 countries, has led opposition by nongovernmental groups. Human Rights Watch, for which I work, co-founded and coordinates the campaign.</p>
<h2>Other problems with killer robots</h2>
<p>Fully autonomous weapons would <a href="https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf">threaten more</a> than humanity and the public conscience. They would likely violate other key rules of international law. Their use would create a gap in accountability because no one could be held individually liable for the unforeseeable actions of an autonomous robot.</p>
<p>Furthermore, the existence of killer robots would spark widespread proliferation and an arms race – dangerous developments made worse by the fact that fully autonomous weapons would be vulnerable to hacking or technological failures.</p>
<p>Bolstering the case for a ban, our Martens Clause assessment highlights in particular how delegating life-and-death decisions to machines would violate core human values. Our report finds that there should always be meaningful human control over the use of force. We urge countries at this U.N. meeting to work toward a new treaty that would save people from lethal attacks made without human judgment or compassion. A clear ban on fully autonomous weapons would reinforce the longstanding moral and legal foundations of international humanitarian law articulated in the Martens Clause.</p>
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch.</span></em></p>
<p class="fine-print"><em>A standard element of international humanitarian law since 1899 should guide countries as they consider banning lethal autonomous weapons systems. – Bonnie Docherty, Lecturer on Law and Associate Director of Armed Conflict and Civilian Protection, International Human Rights Clinic, Harvard Law School, Harvard University</em></p>
<h1>Why technology puts human rights at risk</h1><figure><img src="https://images.theconversation.com/files/225514/original/file-20180629-117425-1akpxde.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/circuit-board-futuristic-server-code-processing-618880562?src=p61H5jBkfUQ_YT9RcaqgCg-1-3">Spainter_vfx/Shutterstock.com</a></span></figcaption></figure><p>Movies such as <a href="https://theconversation.com/uk/topics/2001-a-space-odyssey-32039">2001: A Space Odyssey</a>, <a href="https://theconversation.com/uk/topics/blade-runner-15885">Blade Runner</a> and <a href="https://theconversation.com/uk/topics/terminator-13739">Terminator</a> brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don’t seem so far removed from reality. </p>
<p>Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.</p>
<p>Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?</p>
<h2>AI and human rights</h2>
<p>First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I’m not sure that the focus of our concern for human rights should really lie with <a href="http://thehill.com/policy/defense/342659-top-us-general-warns-against-rogue-killer-robots">rogue robots</a>, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gCcx85zbxz4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we <a href="https://theconversation.com/super-intelligence-and-eternal-life-transhumanisms-faithful-follow-it-blindly-into-a-future-for-the-elite-78538">move towards an AI arms race</a>, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems <a href="https://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf">in charge of life and death decisions</a>, with limited or no human control. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/super-intelligence-and-eternal-life-transhumanisms-faithful-follow-it-blindly-into-a-future-for-the-elite-78538">Super-intelligence and eternal life: transhumanism's faithful follow it blindly into a future for the elite</a>
</strong>
</em>
</p>
<hr>
<p>AI also revolutionises the link between warfare and surveillance practices. Groups such as the <a href="https://www.icrac.net/about-icrac/">International Committee for Robot Arms Control (ICRAC)</a> recently expressed their opposition to Google’s participation in <a href="https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/">Project Maven</a>, a military program that uses machine learning to analyse drone surveillance footage, which can be used for extrajudicial killings. ICRAC <a href="https://www.icrac.net/open-letter-in-support-of-google-employees-and-tech-workers/">appealed</a> to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it <a href="https://www.axios.com/military-artificial-intelligence-google-contract-5c570912-092c-4378-a54b-cf119296fb38.html">will not be renewing</a> its contract.</p>
<p>In 2013, the extent of surveillance practices was highlighted by the Edward Snowden <a href="https://www.theguardian.com/us-news/the-nsa-files">revelations</a>. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding <a href="https://theconversation.com/uk/topics/cambridge-analytica-51337">Cambridge Analytica</a>’s harvesting of personal data via the use of social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference into democratic elections that damage the right to freedom of expression.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/should-we-fear-the-rise-of-drone-assassins-two-experts-debate-87699">Should we fear the rise of drone assassins? Two experts debate</a>
</strong>
</em>
</p>
<hr>
<p>Meanwhile, critical data analysts challenge <a href="https://theconversation.com/machine-gaydar-ai-is-reinforcing-stereotypes-that-liberal-societies-are-trying-to-get-rid-of-83837">discriminatory practices</a> associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.</p>
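<p>To see how easily this happens, consider a minimal sketch in Python. The “hiring” data below is entirely synthetic and hypothetical – the group labels, skill scores and the 0.8 penalty are all invented for illustration – yet a standard model trained on it dutifully reproduces the historical skew:</p>
<pre><code>
# Hypothetical sketch: a model trained on skewed decisions learns the skew.
# All data is synthetic; nothing here comes from a real dataset or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)     # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)   # skill is identically distributed in both groups
# Historical decisions applied an extra penalty to group B, regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {'AB'[g]}: {rate:.2f}")
# Group B's predicted hire rate comes out markedly lower at equal skill:
# the model has faithfully learned the stereotype baked into its training data.
</code></pre>
<p>Nothing in the code is malicious; the discrimination is inherited entirely from the data – which is precisely the concern about deploying such systems in policing, judicial decisions or employment.</p>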
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=511&fit=crop&dpr=1 600w, https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=511&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=511&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=642&fit=crop&dpr=1 754w, https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=642&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/225515/original/file-20180629-117425-1o314b3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=642&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AI can replicate and entrench stereotypes.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/man-holding-passport-photos-431266420?src=BDZda4VlwOgPvx9os7NRtg-1-73">Ollyy/Shutterstock.com</a></span>
</figcaption>
</figure>
<h2>Ambiguous bots</h2>
<p>The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on <a href="https://www.cser.ac.uk/news/malicious-use-artificial-intelligence/">The Malicious Use of Artificial Intelligence</a>. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI? </p>
<p>There are ongoing efforts to update existing human rights principles for this era. These include the <a href="https://www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf">UN Guiding Principles on Business and Human Rights</a>, attempts to write a <a href="https://www.bl.uk/my-digital-rights/videos/magna-carta-for-the-digital-age">Magna Carta for the digital age</a> and the Future of Life Institute’s <a href="https://futureoflife.org/ai-principles/">Asilomar AI Principles</a>, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.</p>
<p>These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.</p>
<p>Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.</p>
<p>But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.</p>
<p>And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.</p>
<p class="fine-print"><em><span>Birgit Schippers does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Computational technologies impact on every human right.Birgit Schippers, Visiting Research Fellow, Senator George J Mitchell Institute for Global Peace, Security and Justice, Queen's University BelfastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/860862018-01-29T11:27:39Z2018-01-29T11:27:39ZArtificial intelligence is the weapon of the next Cold War<figure><img src="https://images.theconversation.com/files/203575/original/file-20180126-100908-5sdqnl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">With artificial intelligence weapons on both sides, are we in a new cold war?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/hand-robot-holding-gun-isolated-on-431271838">Dim Dimich/Shutterstock.com</a></span></figcaption></figure><p>It is easy to confuse the current geopolitical situation with that of the 1980s. The United States and Russia <a href="https://www.nytimes.com/2017/09/01/us/politics/russia-election-hacking.html">each accuse</a> <a href="http://www.businessinsider.com/putin-accuses-the-us-of-interfering-in-russias-presidential-election-2017-11">the other</a> of interfering in <a href="http://www.businessinsider.com/release-the-memo-campaign-russia-linked-twitter-accounts-2018-1">domestic affairs</a>. Russia has <a href="http://www.newsweek.com/russia-crimea-ukraine-how-putin-took-territory-without-fight-640934">annexed territory</a> over U.S. objections, raising concerns about <a href="https://www.reuters.com/article/us-usa-ukraine-arms/u-s-says-it-will-provide-ukraine-with-defensive-aid-idUSKBN1EH00X">military conflict</a>.</p>
<p>As during the Cold War <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">after World War II</a>, nations are developing and building weapons based on advanced technology. During the Cold War, the weapon of choice was nuclear missiles; today it’s software, whether it’s used for attacking <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">computer systems</a> or <a href="https://theconversation.com/ai-researchers-should-not-retreat-from-battlefield-robots-they-should-engage-them-head-on-45367">targets in the real world</a>.</p>
<p>Russian rhetoric about the importance of artificial intelligence is picking up – and with good reason: As artificial intelligence software develops, it will be able to make decisions based on more data, and more quickly, than humans can handle. As someone who researches the use of AI for applications as diverse as <a href="https://www.mprnews.org/story/2017/05/03/future-drones-student-competitions">drones</a>, <a href="https://doi.org/10.1109/SYSOSE.2017.7994957">self-driving vehicles</a> and <a href="http://ieeexplore.ieee.org/abstract/document/7500861/">cybersecurity</a>, I worry that the world may be entering – or perhaps already in – another cold war, fueled by AI. And I’m <a href="https://www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes/">not</a> <a href="http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/">alone</a>.</p>
<h2>Modern cold war</h2>
<p>Just like the Cold War in the 1940s and 1950s, each side has reason to fear its opponent gaining a technological upper hand. In a recent meeting at the Strategic Missile Academy near Moscow, Russian President Vladimir Putin suggested that AI may be the way Russia can <a href="https://www.rt.com/news/414107-putin-military-ai-hint/">rebalance the power shift</a> created by the U.S. outspending Russia nearly 10-to-1 on defense each year. Russia’s state-sponsored <a href="https://www.rt.com/news/414107-putin-military-ai-hint/">RT media reported</a> AI was “key to Russia beating [the] U.S. in defense.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=473&fit=crop&dpr=1 600w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=473&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=473&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=594&fit=crop&dpr=1 754w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=594&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=594&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What’s the 21st-century equivalent of ‘duck and cover’ against an artificial intelligence attack?</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Return-of-the-Nuclear-Era/6b8900b597714d339e4852f3e8d692e4/132/0">AP Photo, File</a></span>
</figcaption>
</figure>
<p>It sounds remarkably like the rhetoric of the Cold War, where the United States and the Soviets each built up enough nuclear weapons to <a href="http://www.historytoday.com/john-swift/soviet-american-arms-race">kill everyone on Earth many times over</a>. This arms race led to the concept of <a href="http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/history/cold-war/strategy/strategy-mutual-assured-destruction.htm">mutual assured destruction</a>: Neither side could risk engaging in open war without risking its own ruin. Instead, both sides stockpiled weapons and <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">dueled</a> <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">indirectly</a> via smaller armed conflicts and political disputes.</p>
<p>Now, nearly three decades after the end of the Cold War, the U.S. and Russia have decommissioned <a href="http://news.bbc.co.uk/2/hi/in_depth/6103398.stm">tens of thousands</a> of nuclear weapons. However, tensions are growing. Any modern-day cold war would include cyberattacks and nuclear powers’ involvement in allies’ conflicts. It’s already happening.</p>
<p>Both countries have <a href="https://www.bloomberg.com/news/articles/2017-08-31/u-s-orders-closing-of-russian-consulate-in-san-francisco">expelled the other’s diplomats</a>. Russia has <a href="http://www.newsweek.com/russia-crimea-ukraine-how-putin-took-territory-without-fight-640934">annexed</a> Crimea, part of Ukraine. The Turkey-Syria border war has even <a href="http://www.theweek.co.uk/in-depth/91141/why-the-turkey-syria-border-conflict-is-a-proxy-war-for-us-russia">been called</a> a “proxy war” between the U.S. and Russia.</p>
<p>Both countries – and <a href="http://www.icanw.org/the-facts/nuclear-arsenals/">many others too</a> – still have nuclear weapons, but their use by a major power is still unthinkable to most. However, <a href="http://www.newsweek.com/us-russia-start-new-arms-race-says-putin-ally-788354">recent</a> <a href="http://www.news.com.au/technology/innovation/the-us-and-russia-are-headed-towards-a-new-nuclear-arms-race/news-story/74da2261f90c9637464aab198c3d9caf">reports</a> show increased public concern that countries might use them.</p>
<h2>A world of cyberconflict</h2>
<p>Cyberweapons, however, particularly those powered by AI, are still considered <a href="http://www.businessinsider.com/us-retaliate-russia-hacking-election-2017-7">fair game</a> by <a href="http://www.newsweek.com/obama-ordered-cyber-bombs-response-russian-hacking-report-628597">both sides</a>.</p>
<p>Russia and <a href="https://theconversation.com/tracing-the-sources-of-todays-russian-cyberthreat-81593">Russian-supporting hackers</a> have <a href="https://www.usnews.com/news/politics/articles/2017-03-17/long-before-new-hacks-us-worried-by-russian-spying-efforts">spied electronically</a>, launched <a href="http://www.independent.co.uk/news/world/europe/russia-cyber-attack-ukraine-petya-telebots-blackenergy-sbu-cadbury-a7819501.html">cyberattacks</a> against <a href="https://www.nbcnews.com/news/us-news/feds-suspect-russians-behind-cyber-attacks-power-plants-n780701">power plants</a>, <a href="http://www.chicagotribune.com/news/opinion/commentary/ct-ransomware-attack-hacking-virus-20170515-story.html">banks, hospitals and transportation systems</a> – and <a href="http://www.npr.org/2017/08/10/542634370/russian-cyberattack-targeted-elections-vendor-tied-to-voting-day-disruptions">against U.S. elections</a>. Russian cyberattackers have targeted <a href="https://www.wired.com/story/russian-hackers-attack-ukraine/">Ukraine</a> and U.S. allies <a href="https://www.theguardian.com/politics/2017/jun/25/cyber-attack-on-uk-parliament-russia-is-suspected-culprit">Britain</a> and <a href="https://www.reuters.com/article/us-germany-election-cyber/merkel-ally-cites-thousands-of-cyber-attacks-from-russian-ip-addresses-idUSKCN1BF1FA">Germany</a>.</p>
<p>The U.S. is <a href="https://www.scientificamerican.com/article/how-the-u-s-could-retaliate-against-russias-information-war/">certainly capable</a> of responding and <a href="https://www.washingtonpost.com/news/democracy-post/wp/2017/07/21/did-the-united-states-interfere-in-russian-elections/">may have done so</a>. </p>
<p>Putin has said he <a href="http://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/">views artificial intelligence</a> as “the future, not only for Russia, but for all humankind.” In September 2017, he told students that the nation that “becomes the leader in this sphere will <a href="https://www.rt.com/news/401731-ai-rule-world-putin/">become the ruler of the world</a>.” Putin isn’t saying he’ll hand over the nuclear launch codes to a computer, though <a href="http://www.imdb.com/title/tt0086567/">science fiction</a> has portrayed <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">computers launching missiles</a>. He is talking about many other uses for AI.</p>
<h2>Use of AI for nuclear weapons control</h2>
<p>Threats posed by surprise attacks from <a href="http://www.jstor.org/stable/24997010">ship- and submarine-based</a> nuclear weapons and weapons placed near a country’s borders may lead some nations to entrust self-defense tactics – including launching counterattacks – to the rapid decision-making capabilities of an AI system.</p>
<p>In case of an attack, the AI could act more quickly and without the <a href="http://www.slate.com/articles/news_and_politics/politics/2014/04/air_force_s_nuclear_missile_corps_is_struggling_millennial_missileers_suffer.html">potential hesitation</a> or <a href="http://www.bbc.com/news/world-europe-41314948">dissent of a human operator</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=474&fit=crop&dpr=1 600w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=474&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=474&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=595&fit=crop&dpr=1 754w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=595&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=595&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Should a robot sit in this chair and be able to turn the key to launch a nuclear missile?</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Travel-Nuclear-Tourism/6fe411c36c7a4b729811ba281569023e/4/0">U.S. Air Force via AP</a></span>
</figcaption>
</figure>
<p>A fast, automated response capability could help ensure potential adversaries know a nation is ready and willing to launch, the key to <a href="http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/history/cold-war/strategy/strategy-mutual-assured-destruction.htm">mutual assured destruction</a>’s effectiveness as a deterrent. </p>
<h2>AI control of non-nuclear weapons</h2>
<p>AI can also be used to control non-nuclear weapons, including unmanned vehicles such as drones, as well as cyberweapons. Unmanned vehicles must be able to operate while their communications are impaired – which requires onboard AI control. AI control also <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">prevents a group that’s being targeted</a> from stopping or preventing a drone attack by destroying its <a href="http://foreignpolicy.com/2014/11/06/interview-with-a-u-s-air-force-drone-pilot-it-is-oddly-war-at-a-very-intimate-level/">control facility</a>, because control is distributed, both <a href="https://doi.org/10.1145/2810103.2810109">physically and electronically</a>.</p>
<p>Cyberweapons may, similarly, need to <a href="https://doi.org/10.1145/2810103.2810109">operate beyond the range of communications</a>. And reacting to them may require <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">such rapid response</a> that the responses would be best launched and controlled by AI systems. </p>
<p>AI-coordinated attacks can launch cyber or real-world weapons almost instantly, making the decision to attack before a human even notices a reason to. AI systems can change targets and techniques faster than humans can comprehend, much less analyze. For instance, an AI system might launch a drone to attack a factory, observe drones responding to defend, and launch a cyberattack on those drones, with no noticeable pause.</p>
<h2>The importance of AI development</h2>
<p>A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">AI-powered cyberattacks</a> may still be some time away. </p>
<p>Countries might agree to a proposed <a href="https://www.wired.com/2017/05/microsoft-right-need-digital-geneva-convention/">Digital Geneva Convention</a> to limit AI conflict. But that won’t stop AI attacks by <a href="https://www.wsj.com/articles/putin-says-anti-russian-sentiment-is-counterproductive-1496318628">independent nationalist groups</a>, militias, criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out of a desire to be prepared to defend themselves.</p>
<p>With Russia <a href="http://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/">embracing AI</a>, other nations that don’t embrace it, or that restrict AI development, risk becoming <a href="https://doi.org/10.1016/j.clsr.2010.03.003">unable to compete</a> – economically or militarily – with countries wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those without AI may be severely disadvantaged. Perhaps most importantly, though, having sophisticated AIs in many countries could provide a <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">deterrent against attacks</a>, as happened with nuclear weapons during the Cold War.</p>
<p class="fine-print"><em><span>Jeremy Straub is the associate director of the NDSU Institute for Cyber Security Education and Research. He has received funding related to AI and robotics from the North Dakota State University, the NDSU Foundation and Alumni Association, the U.S. National Science Foundation, the University of North Dakota and Sigma Xi. The views presented are his own and do not necessarily represent the views of NDSU or funding agencies.</span></em></p>As tensions between the US and Russia escalate, both sides are developing technological capabilities, including artificial intelligence that could be used in conflict.Jeremy Straub, Assistant Professor of Computer Science, North Dakota State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/876992017-11-28T11:54:20Z2017-11-28T11:54:20ZShould we fear the rise of drone assassins? Two experts debate<figure><img src="https://images.theconversation.com/files/196517/original/file-20171127-2025-26w2y0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> </figcaption></figure><p><em>A new short film from the <a href="https://www.stopkillerrobots.org/">Campaign Against Killer Robots</a> warns of a future where weaponised flying drones target and assassinate certain members of the public, using facial recognition technology to identify them. Is this a realistic threat that could rightly spur an effective ban on the technology? Or is it an overblown portrayal designed to scare governments into taking simplistic, unnecessary and ultimately futile action? We asked two academics for their expert opinions.</em></p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HipTO_7mUOw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Overactive imagination risks panic and distress</h2>
<p><em>Peter Lee is a Reader in Politics and Ethics and Theme Director for Security and Risk Research and Innovation at the University of Portsmouth.</em></p>
<p>The newly released short film offers a bleak dystopia with humans at the mercy of “slaughterbots”. These are autonomous micro-drones with cameras, facial recognition software and lethal explosive charges. Utterly terrifying, and – the film claims – not science fiction but a near-future scenario that really could happen. The film warns with a frightening, deep voice: “They cannot be stopped.” The only salvation from this impending hell is, it is suggested, to ban killer robots. </p>
<p>This imaginative use of film to scare its viewers into action is the 21st-century version of the panic that HG Wells’s science fiction writings created in the early 20th century. New technologies can almost always be used for malevolent purposes but those same technologies – in this case flying robots, facial recognition, autonomous decision-making – can also drive widespread human benefit.</p>
<p>What about the killing part? Yes, three grams of explosive to the head could kill someone. But why go to the expense and trouble of making a lethal micro-drone? Such posturing about the widespread use of targeted, single-shot flying robots is a self-indulgence of technologically advanced societies. It would be hugely costly to develop such selective killing capability for use on a mass scale – certainly outside the capacity of terrorist organisations and, indeed, most militaries.</p>
<p>By comparison, in Rwanda in 1994, <a href="http://www.bbc.co.uk/news/world-africa-26875506">850,000 people</a> were killed in three months, mainly by machetes and garden tools. A <a href="https://theconversation.com/uk/topics/las-vegas-shooting-2017-44158">shooter in Las Vegas</a> killed at least 59 people and wounded more than 500 in only a few minutes. Meanwhile, in Germany, France and the UK, dozens of innocent people have been killed by terrorists using ordinary vehicles to commit murder. Cheap, easy and impossible to ban.</p>
<p>Bombing from aircraft was not outlawed at the <a href="http://www.airpowerstudies.co.uk/sitebuildercontent/sitebuilderfiles/aprvol13no3.pdf">1922-23 Peace Convention</a> at The Hague because governments didn’t want to surrender the security advantages it offered. Similarly, no government will want to relinquish the potential military benefit from drone technology.</p>
<p>Over-dramatic films and active imaginations might well cause panic and distress. But what is really needed is calm discussion and serious debate to put pressure on governments to use new technologies in ways that are beneficial to humankind – not ban them altogether. And where there are military applications, they should follow existing Laws of Armed Conflict and Geneva Conventions.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=560&fit=crop&dpr=1 754w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=560&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=560&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Here come the drones.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>A wake-up call on how robots could change conflicts</h2>
<p><em>Steve Wright is a Reader in the Politics and International Relations Group at Leeds Beckett University and a member of the International Campaign for Armed Robot Control.</em></p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign Against Killer Robots</a>’ terrifying new short film “Slaughterbots” predicts a new age of warfare and automated assassinations, if weapons that decide for themselves who to kill are not banned. The organisation hopes to pressure the UN to outlaw lethal robots under the <a href="https://www.un.org/disarmament/geneva/ccw/">Convention on Certain Conventional Weapons</a> (CCW), which has previously banned <a href="http://www.un.org/Depts/mine/UNDocs/ban_trty.htm">antipersonnel landmines</a>, <a href="http://www.article36.org/weapons/cluster-munitions/cluster-munitions-and-the-ccw/335/">cluster munitions</a> and <a href="http://www.weaponslaw.org/instruments/1995-protocol-on-blinding-laser-weapons">blinding lasers on the battlefield</a>.</p>
<p>Some have suggested that the new film is scaremongering. But the technologies needed to build such autonomous weapons – <a href="https://www.rt.com/news/395375-kalashnikov-automated-neural-network-gun/">intelligent targeting</a> algorithms, geo-location, facial recognition – <a href="https://icrac.net/2017/11/icrac-statement-at-the-2017-ccw-gge-meeting/">are already with us</a>. Many <a href="http://www.defenseone.com/technology/2014/10/inside-navys-secret-swarm-robot-experiment/95813/?oref=d-skybox">existing lethal drone systems</a> only operate in a semi-autonomous mode because of legal constraints and could do much more if allowed. It won’t take much to develop the technology so it has the capabilities shown in the film. </p>
<p>Perhaps the best way to see the film is less as a realistic portrayal of how this technology will be used without a ban, and more as a wake-up call about how it could change conflicts. For some time to come, small arms and light weapons will remain the major instruments of political violence. But the film highlights how the intelligent targeting systems supposedly designed to minimise casualties could be used for a selective cull of an entire city. It’s easy to imagine how this might be put to use in a sectarian or ethnic conflict.</p>
<p>No international ban on inhumane weapons is absolutely watertight. The cluster munitions treaty <a href="https://www.nytimes.com/2016/09/02/world/middleeast/cluster-bombs-syria-yemen.html">has not prevented</a> Russia from using them in Syria, or Saudi Arabia bombing Yemeni civilians with <a href="https://www.theguardian.com/uk-news/2016/dec/18/uk-cluster-bombs-used-in-yemen-by-saudi-arabia-finds-research">old British stock</a>. But the landmine treaty has <a href="http://www.article36.org/weapons/landmines/fifteen-years-after-the-landmine-ban-the-number-of-new-casualties-halves/">halved the estimated number</a> of casualties – and even some of those states that have not ratified the ban, such as the US, now act as if they have. A ban on killer robots could have a similar effect.</p>
<p>Similarly, a ban might not remove all chance of terrorists using these weapons. The international arms market is too promiscuous. But it would remove potential stockpiles of killer robots by forcing governments to limit their manufacture.</p>
<p>Some have argued armed robotic systems might actually help reduce suffering in war since they don’t get tired, abuse captives, or act in self-defence or revenge. <a href="https://www.cc.gatech.edu/ai/robot-lab/online-publications/GIT-GVU-09-02.pdf">They believe</a> that autonomous weapons could be programmed to uphold international law better than humans do.</p>
<p>But, as Prof Noel Sharkey of the <a href="https://icrac.net/">International Campaign for Armed Robot Control</a> points out, this view is based on the fantasy of robots being super smart terminators when today “they have the <a href="http://moralmachines.blogspot.co.uk/2008/12/killer-robots-or-friendly-fridges.html">intelligence of a fridge</a>”. While the technology to enable killer robots exists, without the technology to restrain them, a ban is our best hope of avoiding the kind of scenario shown in the film.</p>
<p class="fine-print"><em><span>Steve Wright is affiliated with the International Campaign for Armed Robot Control (ICRAC). </span></em></p><p class="fine-print"><em><span>Peter Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Campaign Against Killer Robots has launched a terrifying film showing why lethal drones need to be banned.Peter Lee, Reader in Politics and Ethics, University of PortsmouthSteve Wright, Reader, Politics and International Relations Group, Leeds Beckett UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/867582017-11-06T19:21:32Z2017-11-06T19:21:32ZDear Prime Minister: we’d like you to join the call for a ban on killer robots<figure><img src="https://images.theconversation.com/files/193335/original/file-20171106-1008-l1e2hq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A US Air Force MQ-9 Reaper drone is piloted remotely over Afghanistan. But what if AI was to take control? </span> <span class="attribution"><a class="source" href="http://www.afrc.af.mil/News/Photos/igphoto/2000608254/">US Air Force Photo/Lt Col Leslie Pratt.</a></span></figcaption></figure><p>Leading researchers in robotics and artificial intelligence (AI) from Australia and Canada have today published open letters calling on their respective Prime Ministers to take a stand against weaponising AI.</p>
<p>The letters ask that Australia and Canada be the next countries to call for a ban on lethal autonomous weapons at the upcoming United Nations (UN) disarmament conference, the strangely named Conference on the Convention on Certain Conventional Weapons (<a href="https://www.un.org/disarmament/geneva/ccw/">CCW</a>) to be held in Geneva later this month.</p>
<p>To date, 19 countries have called for a pre-emptive ban on autonomous weapons: Algeria, Argentina, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, Holy See, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Venezuela and Zimbabwe. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-problem-too-big-1-artificial-intelligence-and-killer-robots-77957">No problem too big #1: Artificial intelligence and killer robots</a>
</strong>
</em>
</p>
<hr>
<h2>Before Terminator</h2>
<p>Lethal autonomous weapons are often described as “killer robots”. This paints a deceptive picture in most people’s minds. </p>
<p>We’re not talking about a movie-style Terminator, but rather much simpler technologies that are potentially only a few years away. Think of a Predator drone flying above the skies of Iraq, but replace the human pilot with a computer. Now a computer could make the final life-or-death decision to fire its Hellfire missile.</p>
<p>I’m most worried not about smart AI but stupid AI. We will be giving machines the right to make such life-or-death decisions, but current technologies are not capable of making such decisions correctly. </p>
<p>In the longer term, autonomous weapons will become more capable. But my concern then shifts to how such weapons will destabilise the geopolitical order and ultimately become another weapon of mass destruction. </p>
<p>The Australian letter was released simultaneously with one signed by hundreds of AI experts in Canada, including two of the founders of Deep Learning, AI pioneers <a href="http://www.cs.toronto.edu/%7Ehinton/">Geoffrey Hinton</a> and <a href="http://www.iro.umontreal.ca/%7Ebengioy/yoshua_en/">Yoshua Bengio</a>. The Canadian letter urges Prime Minister Justin Trudeau to support such a ban.</p>
<p>In the interest of full disclosure, I organised the <a href="https://www.cse.unsw.edu.au/%7Etw/letter.pdf">Australian letter</a>. It is signed by a dozen or so Deans and Heads of Schools, as well as dozens of professors of AI and robotics. In total 122 faculty members working in AI and robotics in Australia have signed the letter. </p>
<p>The letter says lethal autonomous weapons lacking meaningful human control sit on the wrong side of a clear moral line. It adds:</p>
<blockquote>
<p>To this end, we ask Australia to announce its support for the call to ban lethal autonomous weapons systems at the upcoming UN Conference on CCW. Australia should also commit to working with other states to conclude a new international agreement that achieves this objective.</p>
<p>In this way, our government can reclaim its position of moral leadership on the world stage as demonstrated previously in other areas like the non-proliferation of nuclear weapons.</p>
<p>With Australia’s recent election to the UN’s Human Rights Council, the issue of lethal autonomous weapons is even more pressing for Australia to address.</p>
</blockquote>
<h2>Support is growing</h2>
<p>The AI and robotics communities have sent a clear and consistent message over the past couple of years about this issue. In 2015, thousands of AI and robotics researchers from around the world signed <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">an open letter released at the start of the main AI conference</a> calling for a ban. </p>
<p>Most recently, industry joined the call when in August this year <a href="https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war">more than 100 founders of AI and robotics companies warned</a> of opening “the Pandora’s box” and asked the UN to take urgent action.</p>
<p>The UN is listening and taking action, though like all things diplomatic, progress is not rapid. In December 2016, after three years of informal talks, the UN decided to begin formal discussions within a <a href="https://www.un.org/disarmament/geneva/ccw/meetings-of-the-gge/">Group of Governmental Experts</a>. As the name suggests, this is a group of technical, legal and political experts chosen by the member states to make recommendations about autonomous weapons that could contribute to but not negotiate a treaty banning their use.</p>
<p>This group meets for the first time in Geneva next Monday. It will discuss questions such as: should autonomous weapons always have “meaningful human control”, and what does this mean in practice?</p>
<h2>An AI arms race</h2>
<p>The international non-government body <a href="https://www.hrw.org/">Human Rights Watch</a> has invited me to the meeting and I will speak about the dangers of not taking action to ban autonomous weapons. Without a ban, there will be an arms race to develop increasingly capable autonomous weapons.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-we-signed-the-open-letter-from-scientists-supporting-a-total-ban-on-nuclear-weapons-75209">Why we signed the open letter from scientists supporting a total ban on nuclear weapons</a>
</strong>
</em>
</p>
<hr>
<p>This has rightly been <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">described as the third revolution in warfare</a>. The first revolution was the invention of gunpowder. The second was the invention of nuclear bombs. This third revolution would be another step change in the speed and efficiency with which we could kill. </p>
<p>For these will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned or is in the process of being banned: <a href="https://www.un.org/disarmament/wmd/chemical/">chemical weapons</a> and <a href="https://www.un.org/disarmament/wmd/bio/">biological weapons</a> are banned, and a <a href="https://theconversation.com/were-close-to-banning-nuclear-weapons-killer-robots-must-be-next-80741">nuclear weapons treaty</a> recently reached the 50 signatures required to become law. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.</p>
<p>We cannot stop AI technology being developed. It will be used for many peaceful purposes like autonomous cars. But we can make it morally unacceptable to use AI to kill, as we have decided with chemical and biological weapons.</p>
<p>This, I hope, will make the world a safer and better place.</p>
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Leading experts in AI and robotics want the Prime Ministers of Australia and Canada to join the growing campaign to ban killer robots.Toby Walsh, Research Group Leader at Data61, Professor of AI, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/829632017-08-25T12:20:46Z2017-08-25T12:20:46ZNever mind killer robots – even the good ones are scarily unpredictable<figure><img src="https://images.theconversation.com/files/183350/original/file-20170824-18702-1fxlsp9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who could have predicted it would end like this?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The heads of more than 100 of the world’s top artificial intelligence companies are very alarmed about the development of “killer robots”. In an <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">open letter</a> to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.</p>
<p>But the real threat is much bigger – and not just from human misconduct but from the machines themselves. Research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions. On one level this means human societies can behave very differently to what you might expect from looking at individual behaviour alone. But the same applies to technology. Even ecosystems of relatively simple AI programs – what we call stupid, good bots – can surprise us, even when the individual bots are behaving well.</p>
<p>The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear way. This makes these systems very hard to model and understand. For example, even after many years of climatology, it’s still impossible to make long-term weather predictions. These systems are often very sensitive to small changes and can experience explosive feedback loops. It is also very difficult to know the precise state of such a system at any one time. All these things make these systems intrinsically unpredictable.</p>
<p>All these principles apply to large groups of individuals acting in their own way, whether that’s human societies or groups of AI bots. My colleagues and I <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774">recently studied</a> one type of complex system that featured good bots used to automatically edit Wikipedia articles. These different bots are designed and exploited by Wikipedia’s trusted human editors, and their underlying software is open-source and available for anyone to study. Individually, they all have a common goal of improving the encyclopaedia. Yet their collective behaviour turns out to be surprisingly inefficient.</p>
<p>These Wikipedia bots work based on well-established rules and conventions, but because the website doesn’t have a central management system there is no effective coordination between the people running different bots. As a result, we found pairs of bots that have been undoing each other’s edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn’t notice it either.</p>
<p>The bots are designed to speed up the editing process. But slight differences in the design of the bots or between people who use them can lead to a massive waste of resources in an ongoing “edit war” that would have been resolved much quicker with human editors.</p>
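<p>A toy simulation makes the dynamic concrete. The two “bots” below are hypothetical and far simpler than any real Wikipedia bot, but they capture the failure mode: each enforces its own spelling convention, each behaves exactly as designed, and together they revert each other forever:</p>
<pre><code>
# Toy illustration of an "edit war" between two well-behaved, rule-based bots.
# Both rules are invented; real Wikipedia bots follow much richer conventions.
def bot_a(text: str) -> str:
    return text.replace("color", "colour")   # Bot A enforces British spelling

def bot_b(text: str) -> str:
    return text.replace("colour", "color")   # Bot B enforces American spelling

page = "The color of the flag"
for step in range(6):                         # bots take turns, as on a live wiki
    page = (bot_a if step % 2 == 0 else bot_b)(page)
    print(f"edit {step + 1}: {page}")
# The page oscillates indefinitely: each bot "fixes" the other's edit,
# and neither has the cognition to notice the loop.
</code></pre>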
<p>We also found that the bots behaved differently in different language editions of Wikipedia. The rules are more or less the same, the goals are identical, the technology is similar. But in German Wikipedia, the collaboration between bots is much more efficient and productive compared to, for example, Portuguese Wikipedia. This can only be explained by the differences between the human editors who run these bots in different environments.</p>
<h2>Exponential confusion</h2>
<p>Wikipedia bots have very little autonomy and the system already operates very differently to the goals of individual bots. But the Wikimedia Foundation is <a href="https://blog.wikimedia.org/2017/07/19/scoring-platform-team/">planning to use</a> AI that will give more autonomy to the bots. That will likely lead to even more unexpected behaviour. </p>
<p>Another example is what can happen when two bots designed to speak to humans interact with each other. We’re no longer surprised by the answers given by artificial personal assistants such as the iPhone’s Siri. But put several of these kinds of chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WnzlbyTZsQY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The bigger the system becomes and the more autonomous each bot is, the more complex and hence unpredictable the future behaviour of the system will be. Wikipedia is an example of a large number of relatively simple bots. The chatbot example involves a small number of rather sophisticated and creative bots – in both cases, unexpected conflicts emerged. The complexity, and therefore the unpredictability, increases exponentially as you add more and more individuals to the system. So in a future system with a large number of very sophisticated robots, the unexpected behaviour could go beyond our imagination.</p>
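<p>A back-of-envelope calculation shows why. If each of n bots can be in one of k states (k = 4 here is purely illustrative), the number of pairwise interaction channels grows quadratically while the joint state space grows exponentially:</p>
<pre><code>
# Illustrative arithmetic only: the state counts are invented, but the
# scaling is the point: pairwise channels grow as n(n-1)/2, joint states as k^n.
k = 4  # hypothetical number of states per bot
for n in (2, 10, 100, 1000):
    pairs = n * (n - 1) // 2
    print(f"n={n:>4}: {pairs:>6} pairwise channels, {k}^{n} joint states")
</code></pre>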
<h2>Self-driving madness</h2>
<p>For example, self-driving cars promise exciting advances in the efficiency and safety of road travel. But we don’t yet know what will happen once we have a large, wild system of fully autonomous vehicles. They may well behave very differently to a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars “trained” by different humans in different environments start interacting with one another.</p>
<p>Humans can adapt to new rules and conventions relatively quickly but can still have trouble switching between systems. This can be way more difficult for artificial agents. If a “German-trained” car was driving in Italy, for example, we just don’t know how it would deal with the written rules and unwritten cultural conventions being followed by the many other “Italian-trained” cars. Something as common as crossing an intersection could become lethally risky because we just wouldn’t know if the cars would interact as they were supposed to or whether they would do something completely unpredictable.</p>
<p>Now think of the killer robots that Elon Musk and his colleagues are worried about. A single killer robot could be very dangerous in the wrong hands. But what about an unpredictable system of killer robots? I don’t even want to think about it.</p>
<p class="fine-print"><em><span>Taha Yasseri receives funding from the European Commission and Google. </span></em></p>The unexpected behaviour of even simple bots is only going to get more dramatic as AI scales up.Taha Yasseri, Research Fellow in Computational Social Science, Oxford Internet Institute, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/702452016-12-30T21:11:36Z2016-12-30T21:11:36ZFinding trust and understanding in autonomous technologies<p>In 2016, self-driving cars went mainstream. Uber’s autonomous vehicles <a href="http://fortune.com/2016/09/14/uber-self-driving-cars-pittsburgh/">became ubiquitous</a> in neighborhoods where I live in Pittsburgh, and <a href="http://www.huffingtonpost.com/entry/california-stops-uber-self-driving-cars_us_585bda66e4b0d9a594573319">briefly in San Francisco</a>. The U.S. Department of Transportation issued <a href="https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf">new regulatory guidance</a> for them. Countless <a href="http://dx.doi.org/10.1126/science.aaf2654">papers</a> and <a href="http://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html">columns</a> discussed how self-driving cars <a href="http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/">should</a> <a href="https://www.theguardian.com/technology/2016/aug/22/self-driving-cars-moral-dilemmas">solve</a> <a href="https://theconversation.com/helping-autonomous-vehicles-and-humans-share-the-road-68044">ethical quandaries</a> when things go wrong. And, unfortunately, 2016 also saw the <a href="http://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html">first fatality involving an autonomous vehicle</a>.</p>
<p>Autonomous technologies are rapidly spreading beyond the transportation sector, into <a href="http://dx.doi.org/10.1126/scitranslmed.aad9398">health care</a>, <a href="https://www.cybergrandchallenge.com/">advanced cyberdefense</a> and even <a href="http://www.raytheon.com/capabilities/products/phalanx/">autonomous weapons</a>. In 2017, we’ll have to decide whether we can trust these technologies. That’s going to be much harder than we might expect.</p>
<p>Trust is complex and varied, but also a key part of our lives. We often trust technology <a href="http://jcr.sagepub.com/content/2/4/265">based on predictability</a>: I trust something if I know what it will do in a particular situation, even if I don’t know why. For example, I trust my computer because I know how it will function, including when it will break down. I stop trusting if it starts to behave differently or surprisingly. </p>
<p>In contrast, my trust in my wife is based on <a href="https://www.jstor.org/stable/259288">understanding her beliefs, values and personality</a>. More generally, interpersonal trust does not involve knowing exactly what the other person will do – my wife certainly surprises me sometimes! – but rather why they act as they do. And of course, we can trust someone (or something) in both ways, if we know both what they will do and why.</p>
<p>I have been exploring possible bases for our trust in self-driving cars and other autonomous technology from both ethical and psychological perspectives. These are devices, so predictability might seem like the key. Because of their autonomy, however, we need to consider the importance and value – and the challenge – of learning to trust them in the way we trust other human beings.</p>
<h2>Autonomy and predictability</h2>
<p>We want our technologies, including self-driving cars, to behave in ways we can predict and expect. Of course, these systems can be quite sensitive to the context, including other vehicles, pedestrians, weather conditions and so forth. In general, though, we might expect a self-driving car that is repeatedly placed in the same environment to behave similarly each time. But in what sense would these highly predictable cars be autonomous, rather than merely automatic?</p>
<p><a href="https://ntrs.nasa.gov/search.jsp?R=19790007441">There have</a> <a href="http://dx.doi.org/10.1080/001401399185595">been</a> <a href="http://ws680.nist.gov/publication/get_pdf.cfm?pub_id=823618">many</a> different <a href="http://dx.doi.org/10.5898/JHRI.3.2.Beer">attempts</a> to <a href="http://www.dtic.mil/dtic/tr/fulltext/u2/a601656.pdf">define</a> <a href="http://standards.sae.org/j3016_201609/">autonomy</a>, but they all have this in common: Autonomous systems can make their own (substantive) decisions and plans, and thereby can act differently than expected. </p>
<p>In fact, one reason to employ autonomy (as distinct from automation) is precisely that those systems can pursue unexpected and surprising, though justifiable, courses of action. For example, <a href="https://deepmind.com/research/alphago/">DeepMind’s AlphaGo</a> won the second game of its recent Go series against Lee Sedol in part because of <a href="https://www.wired.com/2016/03/googles-ai-viewed-move-no-human-understand/">a move that no human player would ever make, but was nonetheless the right move</a>. But those same surprises make it difficult to establish predictability-based trust. Strong trust based solely on predictability is arguably possible only for automated or automatic systems, precisely because they are predictable (assuming the system functions normally).</p>
<h2>Embracing surprises</h2>
<p>Of course, other people frequently surprise us, and yet we can trust them to a remarkable degree, even giving them life-and-death power over ourselves. Soldiers trust their comrades in complex, hostile environments; a patient trusts her surgeon to excise a tumor; and in a more mundane vein, my wife trusts me to drive safely. This interpersonal trust enables us to embrace the surprises, so perhaps we could develop something like interpersonal trust in self-driving cars?</p>
<p>In general, interpersonal trust requires an understanding of why someone acted in a particular way, even if you can’t predict the exact decision. My wife might not know exactly how I will drive, but she knows the kinds of reasoning I use when I’m driving. And it is actually relatively easy to understand why someone else does something, precisely because we all think and reason roughly similarly, though with different “raw ingredients” – our beliefs, desires and experiences. </p>
<p>In fact, we continually and unconsciously make inferences about other people’s beliefs and desires based on their actions, in large part by assuming that they think, reason and decide roughly as we do. All of these inferences and reasoning based on our shared (human) cognition enable us to understand someone else’s reasons, and thereby build interpersonal trust over time.</p>
<h2>Thinking like people?</h2>
<p>Autonomous technologies – self-driving cars, in particular – do not think and decide like people. There have been efforts, both <a href="http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33607">past</a> and <a href="http://dx.doi.org/10.1126/science.aab3050">recent</a>, to develop computer systems that think and reason like humans. However, one consistent theme of machine learning over the past two decades has been the enormous gains made precisely by not requiring our artificial intelligence systems to operate in human-like ways. Instead, machine learning algorithms and systems such as AlphaGo have often been able to <a href="http://dx.doi.org/10.1038/nature16961">outperform human experts</a> by focusing on specific, localized problems, and then solving them quite differently than humans do.</p>
<p>As a result, attempts to interpret an autonomous technology in terms of human-like beliefs and desires can go spectacularly awry. When a human driver sees a ball in the road, most of us automatically slow down significantly, to avoid hitting a child who might be chasing after it. If we are riding in an autonomous car and see a ball roll into the street, we expect the car to recognize it, and to be prepared to stop for running children. The car might, however, see only an obstacle to be avoided. If it swerves without slowing, the humans on board might be alarmed – and a kid might be in danger.</p>
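<p>To make the contrast concrete, here is a deliberately simplified Python sketch. It is purely illustrative – the function names and the one-item “scene” are invented, and neither branch reflects any real vehicle’s software – but it captures how the same input can drive two very different behaviours.</p>
<pre><code># Hypothetical illustration only; not drawn from any real vehicle's code.

def human_like_response(scene):
    # People reason from the ball to an unseen child, and slow down early.
    if "ball" in scene:
        return "brake: a child may be chasing the ball"
    return "continue"

def obstacle_only_response(scene):
    # A planner that sees only a generic obstacle may swerve at speed,
    # with no concept of "child" anywhere in its reasoning.
    if "ball" in scene:
        return "swerve around obstacle"
    return "continue"

scene = {"ball"}
print(human_like_response(scene))     # brake: a child may be chasing the ball
print(obstacle_only_response(scene))  # swerve around obstacle
</code></pre>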
<p>Our inferences about the “beliefs” and “desires” of a self-driving car will almost surely be erroneous in important ways, precisely because the car doesn’t have any human-like beliefs or desires. We cannot develop interpersonal trust in a self-driving car simply by watching it drive, as we will not correctly infer the whys behind its actions. </p>
<p>Of course, society or marketplace customers could insist en masse that self-driving cars have human-like (psychological) features, precisely so we could understand and develop interpersonal trust in them. This strategy would give a whole new meaning to “<a href="http://dx.doi.org/10.1002/9781118984390.ch1">human-centered design</a>,” since the systems would be designed specifically so their actions are interpretable by humans. But it would also require including novel <a href="http://stanford.edu/%7Enikmart/papers/hri16paper_CameraReady_small.pdf">algorithms</a> and <a href="http://repository.cmu.edu/cgi/viewcontent.cgi?article=1147&context=dissertations">techniques</a> in the self-driving car, all of which would represent a massive change from current research and development strategies for self-driving cars and other autonomous technologies.</p>
<p>Self-driving cars have the potential to radically reshape our transportation infrastructure in many beneficial ways, but only if we can trust them enough to actually use them. And ironically, the very feature that makes self-driving cars valuable – their flexible, autonomous decision-making across diverse situations – is exactly what makes it hard to trust them.</p><img src="https://counter.theconversation.com/content/70245/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Danks does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The ethics and psychology of trust suggest ways we might learn to understand self-driving cars, but also show why doing so might be more challenging than we expect.David Danks, Professor of Philosophy and Psychology, Carnegie Mellon UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/681552016-12-14T19:08:28Z2016-12-14T19:08:28ZStar Wars: Rogue One highlights an uncomfortable fact – military robots can change sides<p>The latest Star Wars movie, <a href="http://www.imdb.com/title/tt3748528/">Rogue One</a> introduces us to a new droid <a href="http://starwars.wikia.com/wiki/K-2SO">K-2SO</a> that is the robotic lead of the story. </p>
<p>Without giving away too many spoilers, K-2SO is part of the Rebellion freedom fighter group that are tasked with stealing the plans to the first <a href="http://starwars.wikia.com/wiki/Death_Star">Death Star</a>, the infamous moon-sized battle station from the original <a href="http://www.imdb.com/title/tt0076759/">Star Wars</a> movie.</p>
<p>The significance of K-2SO is his back-story. K-2SO is an autonomous military robot that used to fight for the Rebellion’s enemy, the Empire. He was captured and reprogrammed by the rebels and is now a core member of the Rogue One group.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/YWNvdoRnNv8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Rogue One: A Star Wars Story trailer ‘Trust’</span></figcaption>
</figure>
<p>K-2SO is not the first robot to swap sides in a movie. Remember that the Terminator’s initial mission in the <a href="http://www.imdb.com/title/tt0088247/">first movie</a> was to kill Sarah Connor, before it was reprogrammed in later films to protect her and her son, John Connor.</p>
<p>This raises the question of whether, in real life, a programmed military machine could be encouraged, reprogrammed or hacked into defecting.</p>
<h2>Soldiers swapping sides</h2>
<p>The idea of human soldiers swapping sides during wars and conflicts is nothing new. There are numerous examples of soldiers surrendering and then announcing that they have information and would like to help and sometimes fight for their captors.</p>
<p>It is the information about battle plans and tactics that these defecting soldiers have that could potentially change the course of a battle or a military campaign.</p>
<p>One of the most famous defectors was <a href="http://www.biography.com/people/benedict-arnold-9189320">General Benedict Arnold</a>. Arnold was a general in the Continental Army during the American War of Independence, but he defected to the British Army and became a brigadier general. He led British forces against the Americans and retired to London after the war.</p>
<h2>Weapons technology</h2>
<p>The industrial revolution and the rise of mechanical weapons such as tanks, aircraft and submarines in the early 20th century changed the nature of defecting.</p>
<p>It was the development of ever more advanced weapons that gave a nation its advantage over its military rivals. Stealing an enemy’s new weapons was almost impossible, and so it was up to defectors to deliver the plans of the new weapons, or sometimes examples of the actual weapons, to the other side. </p>
<p><a href="https://www.strategypage.com/cic/docs/cic304b.asp">Martin Monti</a>, of the United States Army Air Corps, defected to Italy during 1944 and handed over a photographic reconnaissance version of the <a href="http://www.lockheedmartin.com.au/us/100years/stories/p-38.html">P-38 Lightning</a> aircraft to the Nazi military. He then joined the Nazi SS. </p>
<p>In 1976, during the Cold War, <a href="http://www.bbc.com/future/story/20160905-the-pilot-who-stole-a-secret-soviet-fighter-jet">Viktor Belenko</a> flew his highly secret MiG-25 jet fighter from the USSR to Japan.</p>
<p>NATO had long wanted to get the technical details of this aircraft as it was rumoured to be able to fly three times faster than the speed of sound.</p>
<p>Japan gave the US access to the MiG and Belenko was eventually granted US citizenship. The plane was stripped and analysed by the Americans, who also had the copy of the aircraft’s technical manual that Belenko had brought with him. </p>
<h2>Defectors not necessary</h2>
<p>In the 21st century we have seen the development of remotely controlled systems for reconnaissance, surveillance and the delivery of weapons to targets. Such systems are likely <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2722311">to be very important</a> in the future of defence capabilities.</p>
<p>As this equipment does not require a person on-board, it means that human defectors or spies are no longer required to deliver this robotic hardware to the opposition.</p>
<p>It is impossible to know for sure when the first unmanned system was successfully captured. But because these systems rely on external radio commands and infrastructure, such as GPS, <a href="http://www.forbes.com/sites/thomasbrewster/2015/08/08/qihoo-hacks-drone-gps/">it is plausible</a> that they can be taken over, and it has almost certainly already happened.</p>
<p>In 2011, a US Air Force drone came down in Iran and <a href="https://www.washingtonpost.com/world/national-security/iran-says-it-downed-us-stealth-drone-pentagon-acknowledges-aircraft-downing/2011/12/04/gIQAyxa8TO_story.html">was recovered by the Iranian state</a>. That aircraft was a highly secretive <a href="http://www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104547/rq-170-sentinel.aspx">RQ-170</a> stealth drone and the Iranians <a href="http://www.dailytech.com/Iran+Yes+We+Hacked+the+USs+Drone+and+Heres+How+We+Did+It/article23533.htm">claimed that they had “spoofed” the drone</a> into landing in Iran by creating fake GPS signals. </p>
<p>Experts in the US <a href="http://www.techworld.com/news/security/spy-drone-gps-spoofing-claims-doubted-by-security-analysts-3326032/">doubted those claims</a>, but however the drone was actually captured, Iran ended up with a nearly intact state-of-the-art stealth drone. </p>
<p>They put it on display to international media and stated that they would reverse engineer it and create their own version of this high-tech robotic surveillance aircraft. Iran now appears to <a href="https://theaviationist.com/2016/10/02/iran-unveils-new-ucav-modeled-on-captured-u-s-rq-170-stealth-drone/">have a squadron of these stealth drones</a>, all based on the original captured aircraft.</p>
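<p>One family of defences against this kind of attack is to cross-check satellite navigation against sensors an attacker cannot touch. The Python sketch below is a minimal, hypothetical illustration of that idea – the function name, threshold and coordinates are all invented – in which a GPS fix is rejected if it strays further from the inertial dead-reckoning estimate than accumulated drift could explain.</p>
<pre><code>import math

# Hypothetical sketch of a spoofing plausibility check; real navigation
# filters are far more sophisticated than this.

def plausible_fix(gps_pos, dead_reckoned_pos, drift_margin_m):
    """Accept a GPS fix only if it lies within the dead-reckoning error bound."""
    dx = gps_pos[0] - dead_reckoned_pos[0]
    dy = gps_pos[1] - dead_reckoned_pos[1]
    return math.hypot(dx, dy) <= drift_margin_m

dead_reckoned = (1000.0, 2000.0)  # metres, from inertial sensors
honest_fix = (1004.0, 1998.0)
spoofed_fix = (1500.0, 2600.0)    # an attacker steering the aircraft off course

print(plausible_fix(honest_fix, dead_reckoned, drift_margin_m=50.0))   # True
print(plausible_fix(spoofed_fix, dead_reckoned, drift_margin_m=50.0))  # False
</code></pre>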
<h2>Trusting autonomous robots</h2>
<p>An obvious way to prevent the claimed GPS-spoofing or other similar hacks is to create systems that are truly autonomous and do not require or use external communication systems. </p>
<p>Such robots should be immune to hacking once deployed on their missions. But the development and use of truly autonomous robot weapon systems is a controversial topic. </p>
<p><a href="https://www.stopkillerrobots.org/">The Campaign to Stop Killer Robots</a> was launched in 2013 to both educate the general public about the possible dangers of autonomous killer robots and to try and influence the highest-level decision makers in governments and at the United Nations that such robots should be banned.</p>
<p>The principle at the heart of the campaign is that a <a href="https://www.theguardian.com/sustainable-business/2016/dec/02/the-moral-challenge-of-military-robots-arises-when-we-delegate-fighting-wars">human should make the final decision</a> before a weapon is launched at its intended target. </p>
<p>The International Committee of the Red Cross has pointed out that the so-called “<a href="https://www.icrc.org/eng/resources/documents/audiovisuals/video/2014/rules-of-war.htm">rules of war</a>” must be coded into autonomous military robots of the future. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HwpzzAefx9M?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The International Committee of the Red Cross video about the rules of war and future autonomous military robots.</span></figcaption>
</figure>
<p>Some robotics engineers and researchers are working on exactly this and have started to develop the algorithms that will <a href="http://www.nytimes.com/2015/01/11/magazine/death-by-robot.html">enable autonomous military robots to be ethical</a>. They propose that robots may be able to <a href="http://www.huffingtonpost.com.au/entry/lethal-autonomous-weapons-ronald-arkin_us_574ef3bbe4b0af73af95ea36">protect civilians better than human soldiers</a> can. </p>
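<p>To see the flavour of such proposals, consider the toy Python sketch below. It is loosely inspired by published “ethical governor” ideas but is entirely invented for illustration: real proposals encode the rules of distinction and proportionality far more elaborately than two hard-coded checks.</p>
<pre><code>from dataclasses import dataclass

# Toy illustration only; the fields and scores are invented.

@dataclass
class Target:
    is_combatant: bool
    expected_civilian_harm: int  # coarse illustrative score
    military_advantage: int      # coarse illustrative score

def engagement_permitted(t: Target) -> bool:
    # Distinction: never engage non-combatants.
    if not t.is_combatant:
        return False
    # Proportionality: expected civilian harm must not outweigh advantage.
    if t.expected_civilian_harm > t.military_advantage:
        return False
    return True

print(engagement_permitted(Target(True, 0, 5)))   # True
print(engagement_permitted(Target(True, 9, 5)))   # False: disproportionate
print(engagement_permitted(Target(False, 0, 9)))  # False: not a combatant
</code></pre>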
<p>But all of this assumes that the human creators of the robots are acting ethically and want the robots to also be ethical. </p>
<p>What happens if a future autonomous soldier robot is tasked with doing something that it decides goes against its code of ethics? Will it just say “no”, or will it conclude that the most appropriate action is to turn on its owner? Would it defect to the other side? How would loyalty be built into an autonomous robot and how would the robot’s creator ensure that it could be trusted to not switch sides? </p>
<p>In the coming years you are likely to read dozens of stories about <a href="http://spectrum.ieee.org/static/special-report-trusting-robots">research into trusted autonomy</a>. It is a hot research topic and a critically important one as the world begins to outsource its fighting to robots.</p>
<p>Rogue One: A Star Wars Story may be set a long time ago in a galaxy far, far away, but its plot lines are actually based in our reality.</p>
<p>Dealing with states that build frightening new weapons, stealing plans to those weapons and then fighting back with robots is not science fiction. And it may be that soon we see those fighting robots turn on their creators.</p><img src="https://counter.theconversation.com/content/68155/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jonathan Roberts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rebel fighters in the latest Star Wars movie are helped by a droid that was captured from the enemy and reprogrammed. Could that happen in real life with today’s autonomous weapons?Jonathan Roberts, Professor in Robotics, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/582622016-06-16T09:57:22Z2016-06-16T09:57:22ZLosing control: The dangers of killer robots<figure><img src="https://images.theconversation.com/files/125442/original/image-20160606-13040-16h7t1k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Should we act to prevent this from ever happening?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-312599078/stock-photo-sci-fi-fantasy-d-robot-the-killer-with-titaniumn-amor.html">Armed robot via shutterstock.com</a></span></figcaption></figure><p>New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is <a href="http://futureoflife.org/open-letter-autonomous-weapons/">fast approaching</a>. Fully autonomous weapons, also known as “killer robots,” are quickly moving from the realm of science fiction toward reality.</p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The unmanned Sea Hunter gets underway. At present it sails without weapons, but it exemplifies the move toward greater autonomy.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Sea_Hunter_gets_underway_on_the_Willamette_River_following_a_christening_ceremony_in_Portland,_Ore._(25702146834).jpg">U.S. Navy/John F. Williams</a></span>
</figcaption>
</figure>
<p>These weapons, which could operate on land, in the air or at sea, threaten to revolutionize armed conflict and law enforcement in alarming ways. <a href="http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/54B1B7A616EA1D10C1257CCC00478A59/$file/Article_Arkin_LAWS.pdf">Proponents say these killer robots are necessary</a> because modern combat moves so quickly, and because having robots do the fighting would keep soldiers and police officers out of harm’s way. But the threats to humanity would outweigh any military or law enforcement benefits. </p>
<p>Removing humans from the targeting decision would create a dangerous world. Machines would make life-and-death determinations outside of human control. The risk of disproportionate harm or erroneous targeting of civilians would increase. No person could be held responsible. </p>
<p>Given the <a href="https://www.hrw.org/sites/default/files/supporting_resources/11.2013_memo_to_ccw_delegates_fully_autonomous_weapons.pdf">moral, legal and accountability risks</a> of fully autonomous weapons, preempting their development, production and use cannot wait. The best way to handle this threat is an international, legally binding ban on weapons that lack meaningful human control.</p>
<h2>Preserving empathy and judgment</h2>
<p>At least <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/37D51189AC4FB6E1C1257F4D004CAFB2?OpenDocument">20 countries have expressed in U.N. meetings</a> the belief that humans should dictate the selection and engagement of targets. Many of them have echoed <a href="https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control">arguments laid out in a new report</a>, of which I was the lead author. The report was released in April by <a href="http://www.hrw.org">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a>, two organizations that have been campaigning for a ban on fully autonomous weapons.</p>
<p>Retaining human control over weapons is a <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/PDF/G1312776.pdf?OpenElement">moral imperative</a>. Because they possess empathy, people can feel the emotional weight of harming another individual. Their respect for human dignity can – and should – serve as a check on killing. </p>
<p>Robots, by contrast, lack real emotions, including compassion. In addition, inanimate machines could not truly understand the value of any human life they chose to take. Allowing them to determine when to use force would undermine human dignity. </p>
<p>Human control also promotes compliance with international law, which is designed to protect civilians and soldiers alike. For example, the laws of war <a href="https://www.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=4BEBD9920AE0AEAEC12563CD0051DC9E">prohibit disproportionate attacks</a> in which expected civilian harm outweighs anticipated military advantage. Humans can apply their judgment, based on past experience and moral considerations, and make case-by-case determinations about proportionality. </p>
<p>It would be almost impossible, however, <a href="https://www.hrw.org/sites/default/files/related_material/Advancing%20the%20Debate_8May2014_Final.pdf">to replicate that judgment in fully autonomous weapons</a>, and they could not be preprogrammed to handle all scenarios. As a result, these weapons would be unable to act as “<a href="http://www.icty.org/sid/10052">reasonable commanders</a>,” the traditional legal standard for handling complex and unforeseeable situations. </p>
<p>In addition, the loss of human control would threaten a target’s <a href="https://www.hrw.org/sites/default/files/reports/arms0514_ForUpload_0.pdf">right not to be arbitrarily deprived of life</a>. Upholding this fundamental human right is an obligation during law enforcement as well as military operations. Judgment calls are required to assess the necessity of an attack, and humans are better positioned than machines to make them.</p>
<h2>Promoting accountability</h2>
<p>Keeping a human in the loop on decisions to use force further ensures that <a href="https://www.hrw.org/sites/default/files/reports/arms0415_ForUpload_0.pdf">accountability for unlawful acts</a> is possible. Under international criminal law, a human operator would in most cases escape liability for the harm caused by a weapon that acted independently. Unless he or she intentionally used a fully autonomous weapon to commit a crime, it would be unfair and legally problematic to hold the operator responsible for the actions of a robot that the operator could neither prevent nor punish.</p>
<p>There are additional obstacles to finding programmers and manufacturers of fully autonomous weapons liable under civil law, in which a victim files a lawsuit against an alleged wrongdoer. The United States, for example, establishes <a href="https://supreme.justia.com/cases/federal/us/487/500/case.html">immunity for most weapons manufacturers</a>. It also has high standards for proving a product was defective in a way that would make a manufacturer legally responsible. In any case, victims from other countries would likely lack the access and money to sue a foreign entity. The gap in accountability would weaken deterrence of unlawful acts and leave victims unsatisfied that someone was punished for their suffering. </p>
<h2>An opportunity to seize</h2>
<p>At a U.N. meeting in Geneva in April, <a href="http://www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-experts-laws/documents/DraftRecommendations_15April_final.pdf">94 countries recommended beginning formal discussions</a> about “lethal autonomous weapons systems.” The talks would consider whether these systems should be restricted under the <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, a disarmament treaty that has regulated or banned several other types of weapons, including incendiary weapons and blinding lasers. The nations that have joined the treaty will meet in December for a review conference to set their agenda for future work. It is crucial that the members agree to start a formal process on lethal autonomous weapons systems in 2017.</p>
<p>Disarmament law provides precedent for requiring human control over weapons. For example, the international community adopted the widely accepted treaties banning <a href="https://www.icrc.org/ihl/INTRO/450?OpenDocument">biological weapons</a>, <a href="https://www.icrc.org/ihl/INTRO/553?OpenDocument">chemical weapons</a> and <a href="https://www.icrc.org/ihl/INTRO/580">landmines</a> in large part because of humans’ inability to exercise adequate control over their effects. Countries should now prohibit fully autonomous weapons, which would pose an equal or greater humanitarian risk.</p>
<p>At the December review conference, countries that have joined the Convention on Conventional Weapons should take concrete steps toward that goal. They should initiate negotiations of a new international agreement to address fully autonomous weapons, moving beyond general expressions of concern to specific action. They should set aside enough time in 2017 – at least several weeks – for substantive deliberations.</p>
<p>While the process of creating international law is notoriously slow, countries can move quickly to address the threats of fully autonomous weapons. They should seize the opportunity presented by the review conference because the alternative is unacceptable: Allowing technology to outpace diplomacy would produce dire and unparalleled humanitarian consequences.</p><img src="https://counter.theconversation.com/content/58262/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch. </span></em></p>Machines that can target and kill people without human intervention or accountability pose a moral threat to the world.Bonnie Docherty, Lecturer on Law, Senior Clinical Instructor at Harvard Law School's International Human Rights Clinic, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/577352016-04-13T20:13:31Z2016-04-13T20:13:31ZAustralia should take a stand against ‘killer robots’<figure><img src="https://images.theconversation.com/files/118514/original/image-20160413-18093-1d4vgmm.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Could killer robots like Maximilian from the 1979 film Black Hole become reality?</span> <span class="attribution"><span class="source">Walt Disney Productions</span></span></figcaption></figure><p>Lethal autonomous weapons (or <a href="https://theconversation.com/au/topics/killer-robots">killer robots</a> as the media likes to call them) are the subject of intense discussion in the corridors and committee rooms of the United Nations in Geneva this week.</p>
<p>The international talking shop is playing host to the <a href="http://goo.gl/JU1YJC">third round of multilateral talks</a> on this topic.</p>
<p>The meeting follows on from increasing concerns about the rapid progress being made in areas like artificial intelligence (<a href="https://theconversation.com/au/topics/artificial-intelligence">AI</a>) and <a href="https://theconversation.com/au/topics/robotics">robotics</a>. <a href="http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/">Stephen Hawking, Elon Musk, Bill Gates</a> and others have expressed concern about the direction these technologies may be taking us.</p>
<p>Last July, thousands of researchers working in AI and robotics came together and issued an open letter calling upon the UN to put a <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">pre-emptive ban</a> in place on such weapons.</p>
<p>In the interests of disclosure, I helped put the letter together and will be <a href="http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_SideEventCCW_14April2016rv.pdf">talking at the UN</a> meeting on Thursday.</p>
<h2>Where will this end?</h2>
<p>If we don’t get a ban in place, the end point is clear to my colleagues and me: there will be an arms race and it will look much like the dystopian future painted by Hollywood movies like the Terminator series.</p>
<p>The technology will undoubtedly fall into the hands of terrorists and rogue nations. These people will have no qualms about removing any safeguards in place on its use. Or using it against us.</p>
<p>Unfortunately, we won’t simply have robots fight robots. Wars today are asymmetric and it will be robots against humans. And many of those humans will be innocent civilians.</p>
<p>This is a terrifying prospect.</p>
<h2>We don’t need to end there</h2>
<p>The world has come together in the past to decide not to weaponise a technology. We have bans on biological and chemical weapons. We have treaties to prevent the proliferation of nuclear weapons.</p>
<p>Most recently, we have collectively agreed to ban several technologies including <a href="https://www.icrc.org/ihl/INTRO/570">blinding lasers</a> and <a href="https://www.icrc.org/ihl/INTRO/580">anti-personnel mines</a>.</p>
<p>And whilst these bans have not been 100% effective, the world is undoubtedly a better place for their existence.</p>
<p>The treaties have also not prevented related technologies from being developed; if you go into a hospital, a “blinding” laser may well be used to fix your eyes. But if you go to the battlefields of the world today, you will not find blinding lasers being used. And no arms company today will sell you one.</p>
<p>The same is likely to be true for autonomous weapons. We won’t stop the development of the broad technology. Much the same technology will go into an autonomous car as into an autonomous drone or submarine. </p>
<p>And we’ll definitely want <a href="https://theconversation.com/au/topics/autonomous-vehicles">autonomous cars</a>. One thousand people will die on the roads of Australia this year, and most accidents are the result of driver error. That toll will plummet once we have autonomous cars.</p>
<p>But if we get a UN ban in place, we’ll not have autonomous weapons on the battlefield. And this will be a good thing.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The General Atomics MQ-9 Reaper is already semi-autonomous, and similar combat aircraft could soon be fully autonomous.</span>
<span class="attribution"><a class="source" href="http://www.af.mil/shared/media/photodb/photos/071127-F-2185F-105.jpg">USAF Photographic Archives</a></span>
</figcaption>
</figure>
<h2>Come on Australia</h2>
<p>Australia has led the world in many discussions around disarmament. For instance, we have taken a leading role in <a href="http://www.defence.gov.au/foi/docs/disclosures/421_1213_Documents.pdf">nuclear non-proliferation</a>.</p>
<p>But we have taken a disappointing role so far in the UN discussions around autonomous weapons. Our <a href="http://goo.gl/LNoccK">official position</a> appears welcoming:</p>
<blockquote>
<p>The development of fully autonomous systems able to conduct military targeting operations which kill and injure combatants or civilians may be closer than many of us had imagined. It is an appropriate time to consider the risks of such weapon systems and to make sure we understand fully what might constitute misuse as well as legitimate use of emerging technologies.</p>
</blockquote>
<p>However, we are not helping the discussion with official statements like the following.</p>
<blockquote>
<p>If we were to settle, ultimately, on an agreement that there were limits to the autonomy that lethal weapons may possess, or that there were limits to the weaponisation of autonomous systems, we would also have to design ways, not just of defining, but of implementing, such limits, and of verifying compliance. We should not underestimate the complexity of this task.</p>
</blockquote>
<p>This is not just unhelpful but also wrong. There is no necessity to define ways to verify compliance. Almost no weapon banned by the UN has a compliance regime.</p>
<p>There is no international body to inspect for blinding lasers. Or anti-personnel mines. Even the grand-daddy of all weapon bans, the <a href="https://en.wikipedia.org/wiki/Biological_Weapons_Convention">1975 UN convention on biological weapons</a>, has no formal compliance measures beyond self-reporting by nation states and investigation by the UN Security Council (which has never occurred).</p>
<p>There is also no necessity to define limits on autonomy. For example, the 1998 UN Protocol on Blinding Laser Weapons does not formally define a limit on the wavelength or wattage of a “blinding” laser.</p>
<p>We can simply require that autonomous or semi-autonomous weapons must have “meaningful” human control, and depend on the consensus that will undoubtedly emerge internationally as to what precisely this means.</p>
<h2>Let’s take the lead</h2>
<p>Australia is a world superpower in AI and robotics. We punch well above our weight. We have some of the most automated ports and mines in the world. And we are the reigning <a href="https://theconversation.com/how-we-won-the-world-robot-soccer-championship-45156">world champions at robot soccer</a>. Indeed, we have been world champions five times so far.</p>
<p>And from the reaction I have had <a href="http://www.abc.net.au/radionational/programs/bigideas/killer-robots/7266930">talking about this issue</a> in public, the general population here in Australia supports the view held by both me and thousands of my colleagues that a ban would be a good idea.</p>
<p>All technology can be used for good or bad. Australia should be taking a lead in pushing the world down a good path.</p><img src="https://counter.theconversation.com/content/57735/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh is co-authored the open letter calling for a ban on "killer robots" and is speaking in Geneva this week in support of a ban.</span></em></p>We need to ban lethal autonomous weapons, or “killer robots”, as we have done with biological weapons, land mines and blinding lasers, and Australia should take a leading role in making that happen.Toby Walsh, Professor of AI at UNSW, Research Group Leader, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/536412016-01-29T02:49:43Z2016-01-29T02:49:43ZWe need to keep humans in the loop when robots fight wars<figure><img src="https://images.theconversation.com/files/109429/original/image-20160128-26785-13pcn89.jpg?ixlib=rb-1.1.0&rect=114%2C32%2C1752%2C1089&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who gets to fire the gun? Man or AI-powered machine?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/clement127/12138092936/">Flickr/Robot flingueur</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>Imagine a swarm of tens of millions of armed AI-piloted hexacopters, “killer robots” as <a href="http://stopkillerrobots.org">some call them</a>, sent to wipe out a particular group of people – say, all men of a certain age in a certain city.</p>
<p>Sounds like science fiction but it was a scenario raised by <a href="https://www.cs.berkeley.edu/%7Erussell/">Stuart Russell</a>, a professor of artificial intelligence (AI), as part of a <a href="http://www.weforum.org/events/world-economic-forum-annual-meeting-2016/sessions/what-if-robots-go-to-war">debate on robots in war</a> at the World Economic Forum in Switzerland last week.</p>
<p>This swarm, he claimed, could be developed in about 18 to 24 months with <a href="http://www.encyclopedia.com/topic/Manhattan_Project.aspx">Manhattan Project</a>-style funding. One person could unleash a million weaponised AIs, and humans would have virtually no defence. </p>
<p><a href="http://www.baesystems.com/en/our-company/our-people/board-of-directors/sir-roger-carr">Sir Roger Carr</a>, chairman of weapons manufacturer <a href="http://www.baesystems.com/en/home">BAE Systems</a>, tactfully described Russell’s vision as “extreme”.</p>
<p>But Sir Roger did come out strongly in favour of keeping humans in the loop in the design of autonomous weapons as a means of maintaining “meaningful human control”. An “umbilical cord” between a human and the machine was necessary, he said. Responsibility for the actions of the machine, and compliance with the laws of war, should be assigned to the human, not the machine. </p>
<p>Carr said the weapons business is more heavily regulated than any other industry. He stressed it was not his role to be an advocate for equipment. Rather, his role was to build equipment to government specifications and requirements.</p>
<p>Even so, <a href="http://www.stopkillerrobots.org/2016/01/davos-2/">he was emphatic</a> that autonomous weapons would be “devoid of responsibility” and would have “no sense of emotion or mercy”. It would be a bad idea, he said, to build machines that decided “who to fight, how to fight and where to fight”.</p>
<h2>Humans in, on and off the lethal loop</h2>
<p>One of BAE’s research projects is a remotely piloted stealth fighter-bomber, <a href="http://www.baesystems.com/en/product/taranis">Taranis</a>. This could plausibly evolve into a “human off the loop” weapon – if the UK government specified that requirement.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/nG-TMhvZ1pU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Look! No pilot on board.</span></figcaption>
</figure>
<p>There is always the risk that under combat conditions the satellite link from the human to the machine could fail. The “umbilical cord” could snap. It is not clear how Taranis would behave in this circumstance.</p>
<p>Would it loiter and await reestablishment of its signal? Would it return to base? What would it do if attacked? Such details will need to be clarified sooner or later. </p>
<p><a href="http://passblue.com/2015/03/02/angela-kane-is-leaving-the-un-in-a-political-shuffle/">Angela Kane</a>, a former UN High Representative for Disarmament Affairs, speaking in the debate, characterised progress in negotiations under the <a href="https://www.icrc.org/ihl/INTRO/500?OpenDocument">Convention on Certain Conventional Weapons</a> (CCW) as “glacial”. Definitions remain elusive.</p>
<p>After UN Expert Meetings in <a href="http://bit.ly/1nziBmV">2014</a> and <a href="http://bit.ly/1nziDeB">2015</a>, the meanings of “autonomous”, “fully autonomous” and “meaningful human control” remain disputed. </p>
<h2>Policy loop and firing loop</h2>
<p>There are two distinct areas in which one might want to assert “meaningful human control” of autonomous weapons:</p>
<ol>
<li>the definition of the policy rules that the autonomous weapon mechanically follows</li>
<li>the execution of those rules when firing.</li>
</ol>
<p>Current discussions focus on the latter – the execution of policy in the firing loop (select to engage). The widely accepted terms are “in the loop”, “on the loop” and “off the loop”. Let me explain how the three different terms apply in practice.</p>
<p>Contemporary drones are remote controlled. The robot does not decide to select or engage; a human telepilot does that. The Raytheon <a href="http://www.raytheon.com/capabilities/products/patriot/">Patriot</a> anti-missile system is a “human in the loop” system. Patriot can select a target (based on human defined rules) but will not engage until a human presses a button to confirm.</p>
<p>Raytheon’s <a href="http://www.raytheon.com/capabilities/products/phalanx/">Phalanx</a>, a defensive “close-in weapons system” (CIWS) designed to shoot down anti-ship missiles, can be an “on the loop” system. Once activated, it will select and engage targets. It will pop up an abort button for the human to hit, but will fire if the human does not override the robot’s decision. </p>
<p>Mines are an example of “off the loop” weapons. The human cannot abort and is not required to confirm a decision to detonate and kill. </p>
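<p>The difference between the three arrangements comes down to what happens when the human says nothing. The Python sketch below makes that default explicit; the mode names follow the terms above, but the function itself is invented purely for illustration.</p>
<pre><code>from enum import Enum

# Hypothetical illustration of the three firing-loop arrangements.

class Mode(Enum):
    IN_THE_LOOP = 1   # human must confirm before firing (e.g. Patriot)
    ON_THE_LOOP = 2   # system fires unless the human aborts (e.g. Phalanx)
    OFF_THE_LOOP = 3  # no confirmation, no abort (e.g. a mine)

def fire(mode, human_confirmed=False, human_aborted=False):
    if mode is Mode.IN_THE_LOOP:
        return human_confirmed    # silence means hold fire
    if mode is Mode.ON_THE_LOOP:
        return not human_aborted  # silence means fire
    return True                   # off the loop: fires regardless

print(fire(Mode.IN_THE_LOOP))                      # False: awaiting confirmation
print(fire(Mode.ON_THE_LOOP))                      # True: no abort received
print(fire(Mode.ON_THE_LOOP, human_aborted=True))  # False: aborted in time
</code></pre>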
<p>If you take a standard robotics <a href="https://mitpress.mit.edu/books/autonomous-robots">textbook definition</a> of “autonomous” as referring to the ability of a system to function without an external human operator for a protracted period of time, then the oldest “autonomous” weapons are “off the loop”. For example, the Confederates used naval and land mines (known as “torpedoes” at that time) <a href="http://www.lat34north.com/HistoricMarkers/CivilWar/EventDetails.cfm?EventKey=18641213">during the American Civil War</a> (1861-65).</p>
<h2>Policy autonomy and firing autonomy</h2>
<p>Many people employ a more visionary notion of “autonomous”, namely the ability of a future AI to create or discover (i.e. initiate) the policy rules it will execute in its firing decisions via unsupervised machine learning and evolutionary game theory.</p>
<p>We might think of this as the policy loop. This runs before the firing loop of select and engage. Who or what makes the targeting rules is a critical element of control especially as robots, unlike humans, mechanically follow the rules in their programming. </p>
<p>Thus in addition to notions of remote control and humans being in, on and off the loop in firing, one might explore notions of human policy control and humans being in, on and off the loop of policy formation (i.e. initiating the rules that define who, where and how we fight).</p>
<p>Patriot has human policy control. Programmers key targeting rules into the system and on the basis of these rules Patriot selects targets. Thus initiating the targeting rules is an element of control.</p>
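<p>In code terms, a system with humans in the policy loop keeps its targeting rules as human-authored data that the firing loop merely executes. The sketch below is a hypothetical illustration of that separation, not a model of Patriot or any other real system.</p>
<pre><code># Policy loop: rules written and reviewed by humans.
human_policy = [
    lambda track: track["type"] == "ballistic_missile",
    lambda track: track["inbound"],
]

# Firing loop: mechanical application of the human-defined policy.
def select_target(track, policy):
    return all(rule(track) for rule in policy)

print(select_target({"type": "ballistic_missile", "inbound": True}, human_policy))  # True
print(select_target({"type": "airliner", "inbound": True}, human_policy))           # False
</code></pre>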
<p>The Skynet of Hollywood’s <a href="http://www.imdb.com/title/tt0088247/">Terminator</a> fiction, by contrast, exemplifies a robot that has no humans in its policy or firing loops. </p>
<p>Some non-military contemporary policy is “human in the loop” in that an AI computer model of climate might make policy recommendations but these can be reviewed and approved by humans. </p>
<p>What Carr was describing as objectionable was a machine that devised its own targeting rules (who, how and where to fight). A robot that follows targeting rules defined or approved by humans is more obviously closer to “meaningful human control” than a robot that initiates rules not subject to human review. </p>
<h2>Effective legal control</h2>
<p>If some autonomous weapons are to be permitted, it is critical that effective legal control is built into them such that they cannot perpetrate genocide and war crimes. Developing a swarm of cranium bombers to kill civilians is already a war crime and that use is already banned. </p>
<p>It is already the case that fielded autonomous weapons are subject to <a href="https://www.icrc.org/ihl/WebART/470-750045?OpenDocument">Article 36</a> legal review to ensure they can be operated in accordance with International Humanitarian Law. </p>
<p>There will be some exceptional cases where the human is in the policy loop and off the firing loop (e.g. anti-tank mines and naval mines, which are long-accepted weapons), and cases where battlespace tempo (fast-moving enemy objects) requires humans on the firing loop, not in it, once the system is activated (e.g. Phalanx).</p>
<p>Ideally, where battlespace tempo permits, there should be humans in both policy and firing loops. Taking humans out of the policy loop should be comprehensively and pre-emptively banned.</p><img src="https://counter.theconversation.com/content/53641/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>When it comes to weapons with artificial intelligence, there’s an argument for keeping a human in charge of some of the action.Sean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/496452015-11-15T19:16:50Z2015-11-15T19:16:50ZYour questions answered on artificial intelligence<figure><img src="https://images.theconversation.com/files/100598/original/image-20151103-16547-1kawsyc.jpg?ixlib=rb-1.1.0&rect=1441%2C0%2C5219%2C3198&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Have questions about robots and artificial intelligence?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p><em>Artificial intelligence and robotics have enjoyed a resurgence of interest, and there is renewed optimism about their place in our future. But what do they mean for us?</em></p>
<p><em>You submitted your questions about artificial intelligence and robotics, and we put them – and some of our own – to The Conversation’s experts.</em> </p>
<figure id="q_top"></figure>
<p><em>Here are your questions answered (scroll down or click on the links below):</em></p>
<ol>
<li><a href="#a_1">How plausible is human-like artificial intelligence, such as the kind often seen in films and TV?</a> </li>
<li><a href="#a_2">Automation is already replacing many jobs, from bank tellers to taxi drivers in the near future. Is it time to think about making laws to protect some of these industries?</a> </li>
<li><a href="#a_3">Where will AI be in five-to-ten years?</a> </li>
<li><a href="#a_4">Should we be concerned about military and other armed robots?</a> </li>
<li><a href="#a_5">How plausible is super-intelligent artificial intelligence?</a> </li>
<li><a href="#a_6">Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?</a></li>
<li><a href="#a_7">How do cyborgs differ (technically or conceptually) from A.I.?</a></li>
<li><a href="#a_8">Are you generally optimistic or pessimistic about the long term future of artificial intelligence and its benefits for humanity?</a> </li>
</ol>
<hr>
<figure id="a_1"></figure>
<h2>Q1. How plausible is human-like artificial intelligence?</h2>
<p><strong>A. Toby Walsh, Professor of AI:</strong></p>
<p>It is 100% plausible that we’ll have human-like artificial intelligence.</p>
<p>I say this even though the human brain is the most complex system in the universe that we know of. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no physical laws we know of that would prevent us reproducing or exceeding its capabilities. </p>
<p><strong>A. Kevin Korb, Reader in Computer Science</strong></p>
<p>Popular AI from Isaac Asimov to Steven Spielberg is plausible. What the question doesn’t address is: <em>when</em> will it be plausible? </p>
<p>Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.</p>
<p>What I find less plausible than the AI in fiction is the emotional and moral lives of robots. They seem to be either unrealistically empty, such as the emotion-less <a href="http://www.startrek.com/database_article/data">Data</a> in Star Trek, or unrealistically human-identical or superior, such as the AI in Spike Jonze’s <a href="http://www.imdb.com/title/tt1798709/">Her</a>. </p>
<p>All three – emotion, ethics and intelligence – travel together; none is genuinely possible without the others. But fiction writers tend to treat them as separate. Plato’s Socrates made a similar mistake.</p>
<p><strong>A. Gary Lea, Researcher in Artificial Intelligence Regulation</strong></p>
<p>AI is not impossible, but the real issue is: “how like is like?” The answer probably lies in applied tests: the Turing test was already (arguably) passed in 2014 but there is also the coffee test (can an embodied AI walk into an unfamiliar house and make a cup of coffee?), the college degree test and the job test. </p>
<p>If AI systems could progressively pass all of those tests (plus whatever else the psychologists might think of), then we would be getting very close. Perhaps the ultimate challenge would be whether a suitably embodied AI could live among us as J. Average and go undetected for five years or so before declaring itself.</p>
<p><a href="#q_top">Back to top</a></p>
<hr>
<figure id="a_2"></figure>
<h2>Q2. Automation is already replacing many jobs. Is it time to make laws to protect some of these industries?</h2>
<p><strong>A. Jonathan Roberts, Professor of Robotics</strong></p>
<p>Researchers at the University of Oxford published a now <a href="http://www.futuretech.ox.ac.uk/news-release-oxford-martin-school-study-shows-nearly-half-us-jobs-could-be-risk-computerisation">well-cited paper in 2013</a> that ranked jobs in order of how feasible it was to computerise or automate them. They found that nearly half of jobs in the USA could be at risk from computerisation within 20 years. </p>
<p>This research was followed in 2014 by the viral video hit, <a href="https://www.youtube.com/watch?v=7Pq-S557XQU">Humans Need Not Apply</a>, which argued that many jobs will be replaced by robots or automated systems and that employment would be a major issue for humans in the future. </p>
<p>Of course, it is difficult to predict what will happen, as the reasons for replacing people with machines are not simply based around available technology. The major factor is actually the business case and the social attitudes and behaviour of people in particular markets. </p>
<p><strong>A. Rob Sparrow, Professor of Philosophy</strong></p>
<p>Advances in computing and robotic technologies are undoubtedly going to lead to the replacement of many jobs currently done by humans. I’m not convinced that we should be making laws to protect particular industries though. Rather, I think we should be doing two things.</p>
<p>First, we should be making sure that people are assured of a good standard of living and an opportunity to pursue meaningful projects even in a world in which many more jobs are being done by machines. After all, the idea that, in the future, machines would work so that human beings didn’t have to toil used to be a common theme in utopian thought.</p>
<p>When we accept that machines putting people out of work is bad, what we are really accepting is the idea that whether ordinary people have an income and access to activities that can give their lives meaning should be up to the wealthy, who may choose to employ them or not. Instead, we should be looking to redistribute the wealth generated by machines in order to reduce the need for people to work without thereby reducing the opportunities available to them to be doing things that they care about and gain value from.</p>
<p>Second, we should be protecting vulnerable people in our society from being treated worse by machines than they would be treated by human beings. With my mother, Linda Sparrow, I have argued that introducing robots into the aged care setting will most likely result in older people receiving a worse standard of treatment than they already do in the <a href="http://profiles.arts.monash.edu.au/rob-sparrow/download/InTheHandsOfMachines_ForWeb.pdf">aged care sector</a>. Prisoners and children are also groups who are vulnerable to suffering at the hands of robots introduced without their consent.</p>
<p><strong>A. Toby Walsh, Professor of AI:</strong></p>
<p>There are some big changes about to happen. The #1 job in the US today is truck driver. In 30 years’ time, most trucks will be autonomous. </p>
<p>How we cope with this change is a question not for technologists like myself but for society as a whole. History would suggest that protectionism is unlikely to work. We would, for instance, need every country in the world to sign up. </p>
<p>But there are other ways we can adjust to this brave new world. My vote would be to ensure we have an educated workforce that can adapt to the new jobs that technology creates. </p>
<p>We need people to enter the workforce with skills for jobs that will exist in a couple of decades’ time, when the technologies for those jobs have been invented. </p>
<p>We need to ensure that everyone benefits from the rising tide of technology, not just the owners of the robots. Perhaps we can all work less and share the economic benefits of automation? This is likely to require fundamental changes to our taxation and welfare system informed by the ideas of people like the economist Thomas Piketty. </p>
<p><strong>A. Kevin Korb, Reader in Computer Science</strong></p>
<p>Industrial protection and restriction are the wrong way to go. I’d rather we develop our technology so as to help solve some of our very real problems. That’s bound to bring with it economic dislocation, so a caring society will accommodate those who lose out because of it. </p>
<p>But there’s no reason we can’t address that with improving technology as long as we keep the oligarchs under control. And if we educate people for flexibility rather than to fit into a particular job, intelligent people will be able to cope with the dislocation.</p>
<p><strong>A. Jai Galliot, Defence Analyst</strong></p>
<p>The standard argument is that workers displaced by automation go on to find more meaningful work. However, this does not hold in all cases. </p>
<p>Think about someone who signed up with the Air Force to fly jets. These pilots may have spent their whole social, physical and psychological lives preparing or maintaining readiness to defend their nation and its people.</p>
<p>For service personnel, there are few higher-value jobs than serving one’s nation through rendering active military service on the battlefield, so this assurance of finding alternative and meaningful work in a more passive role is likely to be of little consolation to a displaced soldier. </p>
<p>Thinking beyond the military, we need to be concerned that the Foundation for Young Australians indicates that <a href="http://www.fya.org.au/2015/08/23/the-new-work-order-report/">as many as 60%</a> of today’s young people are being trained for jobs that will soon be transformed due to automation. </p>
<p>The sad fact of the matter is that one robot can replace many workers. The future of developed economies therefore depends on youth adapting to globalised and/or shared jobs that are increasingly complemented by automation within what will inevitably be an innovation and knowledge economy.</p>
<p><a href="#q_top">Back to top</a></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=305&fit=crop&dpr=1 600w, https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=305&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=305&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=383&fit=crop&dpr=1 754w, https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=383&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/100597/original/image-20151103-16547-d06h0u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=383&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<hr>
<figure id="a_3"></figure>
<h2>Q3. Where will AI be in five-to-ten years?</h2>
<p><strong>A. Toby Walsh, Professor of AI:</strong></p>
<p>AI will become the operating system of all our connected devices. Apps like Siri and Cortana will morph into the way we interact with the connected world.</p>
<p>AI will be the way we interact with our smartphones, cars, fridges, central heating systems and front doors. We will be living in an always-on world.</p>
<p><strong>A. Jonathan Roberts, Professor of Robotics</strong></p>
<p>It is likely that in the next five to ten years we will see machine learning systems interact with us in the form of robots. The next large technology hurdle that must be overcome in robotics is to give them the power of sight. </p>
<p>This is a grand challenge and one that has filled the research careers of many thousands of robotics researchers over the past four or five decades. There is a growing feeling in the robotics community that machine learning using large datasets will finally crack some of the problems in enabling a robot to actually see. </p>
<p>Four universities have recently teamed up in Australia in an ARC funded <a href="http://www.couriermail.com.au/business/grant-helps-arc-centre-of-excellence-in-robotic-vision-make-star-wars-a-reality/story-fnihsps3-1226903121219">Centre of Excellence in Robotic Vision</a>. Their mission is to solve many of the problems that prevent robots seeing.</p>
<p><a href="#q_top">Back to top</a></p>
<hr>
<figure id="a_4"></figure>
<h2>Q4. Should we be concerned about military and other armed robots?</h2>
<p><strong>A. Rob Sparrow, Professor of Philosophy</strong></p>
<p>The <a href="http://profiles.arts.monash.edu.au/rob-sparrow/download/JustSayNoToDrones.pdf">last thing</a> humanity needs now is for many of its most talented engineers and roboticists to be working on machines for killing people. </p>
<p>Robotic weapons will greatly <a href="http://profiles.arts.monash.edu.au/rob-sparrow/download/rsparrow-ieeets-predators.pdf">lower the threshold of conflict</a>. They will make it easier for governments to start wars because they will hold out the illusion of being able to fight without taking any casualties. They will increase the risk of accidental war because militaries will deploy unmanned systems in high-threat environments where it would be too risky to place a human being, such as just outside a potential enemy’s airspace or deep-sea ports. </p>
<p>In these circumstances, robots may even start wars without any human being having the chance to veto the decision. The use of autonomous robots to kill people threatens to further erode respect for human life.</p>
<p>It was for these reasons that, with several colleagues overseas, I co-founded the <a href="http://icrac.net/">International Committee for Robot Arms Control</a>, which has in turn supported the <a href="http://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>.</p>
<p><strong>A. Toby Walsh, Professor of AI:</strong></p>
<p>“Killer robots” are the next revolution in warfare, after gunpowder and nuclear bombs. If we act now, we can perhaps get a ban in place and prevent an arms race to develop better and better killer robots.</p>
<p>A ban won’t uninvent the technology. It’s much the same technology that will go, for instance, into our autonomous cars. And autonomous cars will help prevent the 1,000 or so deaths on Australia’s roads each year. </p>
<p>But a ban will attach enough stigma to the technology that arms companies won’t sell such weapons, and won’t keep developing them to be better and better at killing humans. This has worked with a number of other weapon types in the past, such as blinding lasers. If we don’t put a ban in place, you can be sure that terrorists and rogue nations will use killer robots against us. </p>
<p>For those who argue that killer robots are already covered by existing humanitarian law, I profoundly disagree. We cannot today engineer them to avoid causing excessive collateral damage. And in the future, when we can, there is little stopping them being hacked and made to behave unethically. Even used lawfully, they will be weapons of terror. </p>
<p>You can learn more about these issues by watching my TEDx talk on this topic.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/c277ynyRPgs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p><strong>A. Sean Welsh, Researcher in Robot Ethics</strong></p>
<p>We should be concerned about military robots. However, we should not be under the illusion that there is no existing legislation that regulates weaponised robots.</p>
<p>There is no specific law that bans murdering with piano wire. There is simply a general law against murder. We do not need to ban piano wire to stop murders. Similarly, existing laws already forbid the use of any weapons to commit murder in peacetime and to cause unlawful deaths in wartime. </p>
<p>There is no need to ban autonomous weapons as a result of fears that they may be used unlawfully any more than there is a need to ban autonomous cars for fear they might be used illegally (as car bombs). The use of any weapon that is indiscriminate, disproportionate and causes unnecessary suffering is already unlawful under international humanitarian law.</p>
<p>Some advocate that autonomous weapons should be put in the same category as biological and chemical weapons. However, the main reason for bans on chemical and biological weapons is that they are inherently indiscriminate (cannot tell friend from foe from civilian) and cause unnecessary suffering (slow painful deaths). They have no humanitarian positives.</p>
<p>By contrast, there is no suggestion that “killer robots” (even in the examples given by opponents) will necessarily be indiscriminate or cause painful deaths. The increased precision and accuracy of robotic weapons systems compared with human-operated ones is a key point in their favour. </p>
<p>If correctly engineered, they would be less likely to cause collateral damage to innocents than human-operated weapons. Indeed, robot weapons might be engineered to be more likely to capture rather than kill. Autonomous weapons do have potential humanitarian positives. </p>
<p><a href="#q_top">Back to top</a></p>
<hr>
<figure id="a_5"></figure>
<h2>Q5. How plausible is super-intelligent AI?</h2>
<p><strong>A. David Dowe, Associate Professor in Machine Learning and Artificial Intelligence</strong></p>
<p>We can look at the progress made at various tasks once said to be impossible for machines to do, and see them one by one gradually being achieved. For example: beating the human world chess champion (1997); winning at Jeopardy! (2011); driverless vehicles, which are now somewhat standard on mining sites; automated translation, etc. </p>
<p>And, insofar as intelligence test problems are a measure of <a href="https://theconversation.com/mammals-machines-and-mind-games-whos-the-smartest-566">intelligence</a>, I’ve <a href="http://www.sciencedirect.com/science/article/pii/S0004370215001538">recently looked</a> at how computers are performing on these tests.</p>
<p><strong>A. Rob Sparrow, Professor of Philosophy</strong></p>
<p>If there can be artificial intelligence then there can be super-intelligent artificial intelligences. There doesn’t seem to be any reason why entities other than human beings could not be intelligent. Nor does there seem to be any reason to think that the highest human IQ represents the upper limit on intelligence.</p>
<p>If there is any danger of human beings creating such machines in the near future, we should be very scared. Think about how human beings treat rats. Why should machines that are as many times more intelligent than us as we are more intelligent than rats treat us any better?</p>
<p><a href="#q_top">Back to top</a></p>
<hr>
<figure id="a_6"></figure>
<h2>Q6. Given what little we know about our own minds, can we expect to intentionally create artificial consciousness?</h2>
<p><strong>A. Kevin Korb, Reader in Computer Science</strong></p>
<p>As a believer in functionalism, I believe it is possible to create artificial consciousness. It doesn’t follow that we can “expect” to do it, but only that we might. </p>
<p>John Searle’s arguments <a href="http://plato.stanford.edu/entries/chinese-room/">against the possibility of artificial consciousness</a> seem to confuse functional realisability with computational realisability. That is, it may well be (logically) impossible to “compute” consciousness, but that doesn’t mean that an embedded, functional computer cannot be conscious.</p>
<p><strong>A. Rob Sparrow, Professor of Philosophy</strong></p>
<p>A number of engineers, computer scientists, and science fiction authors argue that we are on the verge of creating artificial consciousness. They usually proceed by estimating the number of neurons in the human brain and pointing out that we will soon be able to build computers with a similar number of logic gates. </p>
<p>If you ask a psychologist or a psychiatrist, whose job it is to actually “fix” minds, I think you will likely get a very different answer. After all, the state-of-the-art treatment for severe depression still consists in shocking the brain with electricity, which looks remarkably like trying to fix a stalled car by pouring petrol over the top of the engine. So I’m sceptical that we understand enough about the mind to design one.</p>
<p><a href="#q_top">Back to top</a></p>
<hr>
<figure id="a_7"></figure>
<h2>Q7. How do cyborgs differ (technically or conceptually) from AI?</h2>
<p><strong>A. Katina Michael, Associate Professor in Information Systems</strong></p>
<p>A cyborg is a human-machine combination. By definition, a cyborg is any human who adds parts to their body, or enhances their abilities, by using technology. As we have advanced our technological capabilities, we have discovered that we can merge technology onto and into the human body for prosthesis and/or amplification. Thus, technology is no longer an extension of us, but “becomes” a part of us if we opt into that design. </p>
<p>In contrast, artificial intelligence is the capability of a computer system to learn from its experiences and simulate human intelligence in decision-making. A cyborg usually begins as a human and may undergo a transformational process, whereas artificial intelligence is imbued into a computer system itself predominantly in the form of software.</p>
<p>Some researchers have claimed that a cyborg can also begin as a humanoid robot that incorporates the living tissue of a human or other organism. Regardless of whether the coalescence is human-to-machine or machine-to-organism, when AI is applied via silicon microchips or nanotechnology embedded in prosthetic forms, such as a limb, a vital organ, or a replacement or additional sensory input, the resulting human or piece of machinery is said to be a cyborg. </p>
<p>There are already early experiments with such cybernetics. In 1998 <a href="http://www.kevinwarwick.com/">Professor Kevin Warwick</a> named his first experiment Cyborg 1.0, surgically implanting a silicon chip transponder into his forearm. In 2002, in project Cyborg 2.0, Warwick had a 100-electrode array surgically implanted into the median nerve fibres of his left arm.</p>
<p>Ultimately we need to be extremely careful that any artificial intelligence we invite into our bodies does not submerge the human consciousness and, in doing so, rule over it. </p>
<p><a href="#q_top">Back to top</a></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=477&fit=crop&dpr=1 600w, https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=477&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=477&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=600&fit=crop&dpr=1 754w, https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=600&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/101200/original/image-20151109-16273-1b789tq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=600&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Cybernetics is already with us.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<hr>
<figure id="a_8"></figure>
<h2>Q8. Are you generally optimistic or pessimistic about the future of artificial intelligence and its benefits for humanity?</h2>
<p><strong>A. Toby Walsh, Professor of AI:</strong></p>
<p>I am both optimistic and pessimistic. AI is one of humankind’s truly revolutionary endeavours. It will transform our economies, our society and our position in the centre of this world. If we get this right, the world will be a much better place. We’ll all be healthier, wealthier and happier. </p>
<p>Of course, as with any technology, there are also bad paths we might end up following instead of the good ones. And unfortunately, humankind has a recent track record of following the bad paths. </p>
<p>We know global warming is coming but we seem unable to leave this path. We know that terrorism is fracturing the world but we seem unable to prevent this. AI will also challenge our society in deep and fundamental ways. It will, for instance, completely change the nature of work. Science fiction will soon be science fact.</p>
<p><strong>A. Rob Sparrow, Professor of Philosophy</strong></p>
<p>I am generally pessimistic about the long term impact of artificial intelligence research on humanity. </p>
<p>I don’t want to deny that artificial intelligence has many benefits to offer, especially in supporting human beings to make better decisions and to pursue scientific goals that are currently beyond our reach. Investigating how brains work by trying to build machines that can do what they do is an interesting and worthwhile project in its own right. </p>
<p>However, there is a real danger that the systems that AI researchers come up with will mainly be used to further enrich the wealthy and to entrench the power of the powerful. </p>
<p>I also think there is a risk that the prospect of AI will allow people to delude themselves that we don’t need to do something about climate change now. It may also distract them from the fact that we already know what to do, but we lack the political will to do it. </p>
<p>Finally, even though I don’t think we’ve currently got much of a clue of how this might happen, if engineers do eventually succeed in creating genuine AIs that are smarter than we are, this might well be a species-level <a href="http://cser.org/">extinction threat</a>. </p>
<p><strong>A. Jonathan Roberts, Professor in Robotics</strong></p>
<p>I am generally optimistic about the long-term future of AI to humanity. I think that AI has the potential to radically change humanity and hence, if you don’t like change, you are not going to like the future. </p>
<p>I think that AI will revolutionise health care, especially diagnosis, and will enable the customisation of medicine to the individual. It is very possible that AI GPs and robot doctors will share their knowledge as they acquire it, creating a super doctor that will have access to all the medical data of the world. </p>
<p>I am also optimistic because humans tend to recognise when technology is having major negative consequences, and we eventually deal with it. Humans are in control and will naturally try to use technology to make a better world.</p>
<p><strong>A. Kevin Korb, Reader in Computer Science</strong></p>
<p>I’m pessimistic about the medium-term future of humanity. I think climate change and attendant dislocations, wars etc. may well massively disrupt science and technology. In that case progress on AI may stop. </p>
<p>If that doesn’t happen, then I think progress will continue and we’ll achieve AI in the long term. Along the way, AI research will produce spin-offs that help the economy and society, so as long as the field exists, AI technology will be important.</p>
<p><strong>A. Gary Lea, Researcher in Artificial Intelligence Regulation</strong></p>
<p>I suspect the long-term future for AI will turn out to be the usual mixed bag: some good, some bad. If scientists and engineers think sensibly about safety and public welfare when making their research, design and build choices (and provided there are suitable regulatory frameworks in place as a backstop), I think we should be okay. </p>
<p>So, on balance, I am cautiously optimistic on this front, but there are many other long-term existential risks for humanity.</p>
<p><a href="#q_top">Back to top</a></p><img src="https://counter.theconversation.com/content/49645/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the ARC, the Humboldt Foundation and AOARD.</span></em></p><p class="fine-print"><em><span> David Dowe receives funding from the Australian Research Council (<a href="http://www.ARC.gov.au">www.ARC.gov.au</a>) and (Cadability) InfoPlum Pty Ltd, and has received funding from the Spanish government's Explora-Ingenio scheme. As editor of the Solomonoff memorial conference (to which the AOARD and NICTA contributed support) proceedings, he may receive royalties from any copies sold. He is affiliated with Monash University.
Please note that David Dowe's contributions to this panel piece have subsequently been substantially edited.</span></em></p><p class="fine-print"><em><span>Jai Galliott receives funding from the Department of Defence and previously served as an officer of the Royal Australian Navy. He is affiliated with the Program on the Regulation of Emerging Military Technologies (PREMT) at the Melbourne Law School and a number of other groups examining the ethical, legal and social implications of emerging technologies. </span></em></p><p class="fine-print"><em><span>Jonathan Roberts is an Associate Investigator with the ARC Centre of Excellence for Robotic Vision.</span></em></p><p class="fine-print"><em><span>Katina Michael receives funding from the Australian Research Council (ARC). She is affiliated with the Institute of Electrical and Electronics Engineers (IEEE) and the Australian Privacy Foundation (APF).</span></em></p><p class="fine-print"><em><span>Kevin Korb is co-founder of Bayesian Intelligence Pty Ltd, which consults in applied Artificial Intelligence. He has received funding from the Australian Research Council to do research on Artificial Intelligence. And he is a Senior Member of the IEEE and chair of the Victorian IEEE Computational Intelligence Society. </span></em></p><p class="fine-print"><em><span>Robert Sparrow receives funding from the Australian Research Council. He is a member of the International Committee for Robot Arms Control.</span></em></p><p class="fine-print"><em><span>Gary Lea and Sean Welsh do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Is genuine artificial consciousness possible? Should we protect jobs from automation? Your questions on AI and robots answered here.Toby Walsh, Professor of AI, Research Group Leader, Optimisation Research Group , Data61David Dowe, Associate Professor, Clayton School of Information Technology, Monash UniversityGary Lea, Visiting Researcher in Artificial Intelligence Regulation, Australian National UniversityJai Galliott, Research Fellow in Indo-Pacific Defence, UNSW SydneyJonathan Roberts, Professor in Robotics, Queensland University of TechnologyKatina Michael, Associate Professor, School of Information Systems and Technology, University of WollongongKevin Korb, Reader in Computer Science, Monash UniversityRobert Sparrow, Professor, Department of Philosophy; Adjunct Professor, Centre for Human Bioethics, Monash UniversitySean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/470242015-09-09T15:07:51Z2015-09-09T15:07:51ZThe age of drones has arrived quicker than the laws that govern them<p>Just because you may not have seen a drone overhead doesn’t mean it hasn’t seen you. And, as was demonstrated by the <a href="http://www.bbc.co.uk/news/uk-34181475">killing of two British jihadis</a> in Syria recently, these unmanned aerial vehicles are increasingly deployed by the West as frontline weapons of war. </p>
<p>Drones are set to become a defining feature of this century. Thousands are already in operation in most developed countries worldwide – and that is likely to grow to hundreds of thousands as drones of different shapes and sizes are deployed by the media, emergency services, scientists, farmers, sports enthusiasts, hobbyists, photographers, the armed forces and government agencies.</p>
<p>Eventually, commercial uses will dwarf all others. Amazon promises to deliver purchases within 30 minutes via delivery drones. Domino’s Pizza has staged hot pizza deliveries by drone. More than <a href="http://www.auvsi.org/blogs/auvsi-membership/2015/07/31/section333report2">20 industries are approved to fly commercial drones</a> in the US alone, and developing countries are following suit.</p>
<p>The question is: is this boom in drones moving faster than the law? How can such a proliferation of drones be fitted into current regulations? The answers will need to be written into national and international laws quickly in order to govern an increasingly busy airspace. Many existing laws may need to be tweaked, including those governing cyber-security, stalking, privacy and human rights, insurance, contract and commercial law, and even the laws of war.</p>
<p>There have after all been numerous suspect or dangerous uses of drones already. For example, <a href="http://www.france24.com/en/20150129-france-civilian-drone-legislation-lessons-usa-obama">illegal flights over seven nuclear plants across France</a>, disruption to <a href="http://edition.cnn.com/2015/07/18/us/california-freeway-fire/">US forest fire-fighting</a>, and seven <a href="http://www.theguardian.com/world/2014/dec/07/drone-near-miss-passenger-plane-heathrow">near-misses at airports in the UK</a>. In the US several landowners have <a href="http://www.wired.co.uk/news/archive/2015-07/31/american-shoots-drone-in-garden">shot them down</a>, leading to court cases that pit claims of trespass and the right to privacy against criminal damage.</p>
<h2>Piecemeal legal changes not enough</h2>
<p>French legislators responded swiftly to the first balloon flights by the Montgolfier brothers, issuing a police order in 1784 <a href="http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=1557232&fileOId=1564253">prohibiting all flights over Paris</a> without prior authorisation. In the same way, sovereign states ought to define precisely how and when they will permit drone flights over their territory. So far, legal development to govern drone use has been very piecemeal; most countries have done nothing yet. </p>
<p>Again, it was France that was the first to introduce dedicated legislation governing drones, through a 2012 decree bringing drones within its civil aviation regulations. Drones are allowed to fly between 50 and 150 metres above the ground, and there are penalties of up to five years in prison and fines of €75,000 for <a href="http://www.france24.com/en/20150129-france-civilian-drone-legislation-lessons-usa-obama">unlawful use of a drone</a>.</p>
<p>Also in 2012, the US Congress passed the Federal Aviation Administration (FAA) <a href="http://www.gpo.gov/fdsys/pkg/CRPT-112hrpt381/pdf/CRPT-112hrpt381.pdf">Modernisation and Reform Act</a>, which required the <a href="http://harvardnsj.org/2015/02/drones-in-the-u-s-national-airspace-system-a-safety-and-security-assessment/">integration of civil drones into national airspace</a> by the end of September 2015.</p>
<p>The Italian Civil Aviation Authority issued <a href="http://www.enac.gov.it/repository/ContentManagement/information/N122671512/Regolamento_APR_ed.1.pdf">commercial drone regulations</a> in April 2014. The rules require operators, albeit in vague terms, to comply with data-protection laws and hold insurance. In the meantime, controversial Italian surveillance firm Hacking Team is already developing drones capable of <a href="https://www.rt.com/news/310493-wifi-hacking-spy-drones">delivering spyware to computers and smartphones</a>, infecting them via Wi-Fi.</p>
<p>However, the UK has struggled to fit drones into its legal framework, as neither the Civil Aviation Act nor the Air Navigation Order provides a good fit. The Department for Transport recently announced plans to introduce <a href="http://www.thetimes.co.uk/tto/technology/article4546793.ece">fines of up to £2,500 for flying drones in built-up areas</a>. Some clarity is provided by the CAA’s <a href="https://www.caa.co.uk/application.aspx?catid=33&pagetype=65&appid=11&mode=detail&id=415">Unmanned Aircraft System Operations in UK Airspace Guidance</a>, which requires drone pilots to maintain direct line of sight with drones and limits their altitude to a maximum of 120 metres. Small drones must avoid and give way to manned aircraft at all times.</p>
<h2>Flying in the face of the law</h2>
<p>The International Civil Aviation Organisation (ICAO) plans to introduce policies to regulate civilian drone flight worldwide by 2028. The EU and US have signed a formal agreement to cooperate on integrating drones into civil air traffic management. But consensus is notoriously difficult in international regulation – it could take decades to achieve a global agreement. </p>
<p>The last international civil air treaty of note is the <a href="http://www.icao.int/publications/pages/doc7300.aspx">1944 Chicago convention</a>. This impressive treaty created the standards for the <a href="https://www.routledge.com/products/9780415562126">common use of airspace between nations</a>. It established, for example, that every nation has sovereignty over its airspace and that no aircraft operated by the state (such as military or police) will fly over other states without authorisation. It also required nations’ air regulations to be obeyed, and required aircraft to be registered and to display their registration marks.</p>
<p>For drones, however, it’s not clear what types and sizes of drones are required to be registered and display their nationality. There are drones the size of small birds or even coins that can fly across national borders in near-invisibility, upsetting these egalitarian rules. It’s vital these issues are comprehensively dealt with quickly in a new treaty, in the same spirit of egalitarianism as at Chicago in 1944.</p>
<h2>Dronefare versus Lawfare</h2>
<p>For many, drones are typified by their use for military operations in Afghanistan and Iraq. The US is reported to have up to <a href="http://nation.time.com/2011/06/26/the-new-u-s-smalls-air-force-over-afghanistan/">7,000 drones in Afghanistan</a>, with the main source of funding for developing modern drone technology coming from the military. </p>
<p>Successive US governments’ policies of conducting drone assassinations will perhaps go down as one of the most egregious uses of air power in human history, with thousands of lives lost in an <a href="http://www.tandfonline.com/doi/abs/10.1080/14650045.2012.749241?src=recsys">amorphous conflict</a> against vaguely defined al-Qaeda and ISIS “affiliates”. The only legitimisation in most cases appears to be the White House’s early-morning bureaucratic meetings. </p>
<p>The American Civil Liberties Union has correctly addressed this, stating: “The <a href="http://dronewars.net/drones-and-targeted-killing">targeted killing program</a> itself is not just unlawful but dangerous … it is dangerous to characterise the entire planet as a battlefield.” A recent RAF strike was the first targeted <a href="http://www.bbc.co.uk/news/uk-34181475">UK drone attack</a> on a British citizen. However, UK armed Reaper drones have accounted for <a href="http://dronewars.net/2015/01/14/uk-airstrikes-in-iraq-hit-100-one-third-by-drones/">up to a third of the 100 airstrikes</a> in Iraq alone as at January 2015.</p>
<p>Lethal drone technology is going to be available to nearly all countries in a very short time. The possibility that even a few dozen states might follow the path beaten by the US is really a scary proposition – as what goes around may fly around.</p>
<p class="fine-print"><em><span>Gbenga Oduntan is author of Sovereignty and Jurisdiction in Airspace and Outer Space: Legal Criteria for Spatial Delimitation published with Routledge in 2012.
<a href="https://www.routledge.com/products/9780415562126">https://www.routledge.com/products/9780415562126</a>
</span></em></p>Drones are here, carrying cameras, delivering packages and even toting guns. But the laws to govern their use are way behind.Gbenga Oduntan, Senior Lecturer in International Commercial Law, University of KentLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/459352015-08-13T20:29:47Z2015-08-13T20:29:47ZWe should not dismiss the dangers of ‘killer robots’ so quickly<figure><img src="https://images.theconversation.com/files/91386/original/image-20150811-11091-jtwix5.jpg?ixlib=rb-1.1.0&rect=64%2C39%2C1490%2C993&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The military robots in Marvel's Iron Man 2 might not be so far from reality.</span> <span class="attribution"><span class="source">Marvel Studios/Paramount Pictures</span></span></figcaption></figure><p>In an <a href="http://futureoflife.org/AI/open_letter_autonomous_weapons">open letter</a> I helped publish on July 28 – which has now been signed by more than 2,700 artificial intelligence (AI) and robotics researchers from around the world – we stated that “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”.</p>
<p>A few days later, philosopher <a href="http://www.mq.edu.au/about_us/faculties_and_departments/faculty_of_arts/department_of_philosophy/future_students/postgraduate/current_research_students/jai_galliott/">Jai Galliott</a> challenged the notion of a ban, recommending instead that we <a href="https://theconversation.com/why-we-should-welcome-killer-robots-not-ban-them-45321">welcome offensive autonomous weapons</a> – often called “killer robots” – rather than ban them. </p>
<p>I was pleased to read Jai’s recommendation, even if he calls the open letter I helped instigate “misguided” and “reckless”, and even if I <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">disagree with him profoundly</a>.</p>
<p>This is a complex and multi-faceted problem, and it is worth considering his arguments in detail as they bring several important issues into focus. </p>
<h2>Four points</h2>
<p>Jai puts forward four arguments why a ban is not needed:</p>
<ol>
<li><p>No robot can really kill without human intervention</p></li>
<li><p>We already have weapons of the kind for which a ban is sought</p></li>
<li><p>The real worry is the development of sentient robots, and</p></li>
<li><p>UN bans are virtually useless.</p></li>
</ol>
<p>Let’s consider the claims in turn. </p>
<p>The first argument is that robots cannot kill without human intervention. This is false. The <a href="http://www.globalsecurity.org/military/world/rok/sgr-a1.htm">Samsung SGR-A1 sentry robot</a> being used today in the Korean DMZ has an automatic mode. When in this mode, it will identify and kill targets up to four kilometres away without human intervention. If you are in the DMZ, it will track you and – unless you unambiguously raise your hands in surrender – it will kill you. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/pMkV8E2re9U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Samsung SGR-A1 can run in autonomous mode, and has already been deployed in South Korea.</span></figcaption>
</figure>
<p>The second argument is that we already have weapons of the kind for which a ban is sought. To illustrate this, he mentions the <a href="http://www.raytheon.com/capabilities/products/phalanx/">Phalanx close-in weapon system</a> used by the Australian Navy. This completely misses the point, as the Phalanx is a defensive weapon system. Our open letter specifically called only for a ban on <em>offensive</em> weapon systems. We have nothing against defensive weapons. </p>
<p>However, whether the weapons we seek to ban exist or not is irrelevant to our core argument that they <em>ought</em> to be banned. Anti-personnel mines existed before a ban was put in place with the <a href="https://www.icrc.org/ihl/INTRO/580">Ottawa Treaty</a>. And 46 million such mines have since been destroyed.</p>
<p>Blinding lasers had been developed by both China and the US before the <a href="https://www.icrc.org/ihl/INTRO/570">UN ban</a> was put in place in 1998. And blinding lasers are not in use in Syria or on any other battlefield around the world today. </p>
<p>So whether or not you believe offensive autonomous weapons already exist, it doesn’t undermine our call for a ban. </p>
<p>The third argument is that the real worry is the development of sentient robots. This is also false. We do not discuss sentient weapons at all. Our call for a ban is independent of whether robots ever gain sentience. </p>
<p>Sentient robots like Hollywood’s Terminator would be a very bad thing. Even stupid AI in killer robots that are non-sentient would be a very bad thing. We need a ban today to protect mankind from swarms of armed quadcopters, technology that is practically on the shelves of hardware stores today. </p>
<p>The final argument claims UN bans are virtually useless. This also is false. The UN has very successfully banned <a href="http://www.un.org/disarmament/WMD/Bio/">biological weapons</a>, <a href="http://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html">space-based nuclear weapons</a>, and <a href="https://www.icrc.org/ihl/INTRO/570">blinding laser weapons</a>. And even for arms such as <a href="https://www.opcw.org/chemical-weapons-convention/">chemical weapons</a>, <a href="https://www.icrc.org/ihl/INTRO/580">land mines</a>, and <a href="https://www.icrc.org/applic/ihl/ihl.nsf/INTRO/620?OpenDocument">cluster munitions</a>, where UN bans have been breached or not universally ratified, severe stigmatisation has limited their use. UN bans are thus definitely worth having. </p>
<h2>What’s the endpoint?</h2>
<p>What I view as the central weakness of the arguments advanced in Jai’s article is that they never address the main argument of the open letter: that the endpoint of an AI arms race will be disastrous for humanity. </p>
<p>The open letter proposes a solution: attempting to stop the arms race with an arms control agreement. </p>
<p>The position Jai takes, on the other hand, suggests we should welcome the development of offensive autonomous weapons. Yet it fails to describe what endpoint this will lead to. </p>
<p>It also never attempts to explain why a ban is supported by thousands of AI and robotics experts, by the ambassadors of Germany and Japan, by the International Committee of the Red Cross, by the editorial pages of the Financial Times, and indeed (for the time being) by the US Department of Defense, other than with a dismissive remark about “scaremongering”. </p>
<p>Anybody criticising an arms-control proposal endorsed by such a diverse and serious-minded collection of people and organisations needs to explain clearly what endpoint they are proposing instead, and should not advance arguments against a ban that are either false or irrelevant to the issue.</p>
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council, the Department of Communications, the Asian Office of Aerospace Research and Development and the Humboldt Foundation.</span></em></p>Some have argued we should not ban but embrace offensive autonomous weapons, or ‘killer robots’. But the arguments against a ban are weak.Toby Walsh, Professor, Research Group Leader, Optimisation Research Group , Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/453672015-08-02T20:08:18Z2015-08-02T20:08:18ZAI researchers should not retreat from battlefield robots, they should engage them head-on<figure><img src="https://images.theconversation.com/files/90292/original/image-20150730-25753-ml8e15.jpg?ixlib=rb-1.1.0&rect=1152%2C27%2C4232%2C3485&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI researchers should work to make future battlefield robots more ethical.</span> <span class="attribution"><span class="source">Sandia Labs/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>There are now over <a href="http://futureoflife.org/AI/open_letter_autonomous_weapons">2,400</a> artificial intelligence (AI) and robotics researchers who have signed an <a href="theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">open letter</a> calling for autonomous weapons – often dubbed “killer robots” – to be banned.</p>
<p>They cite a number of concerns about autonomous weapons, particularly advancing a version of the proliferation argument, which states that military robots will proliferate and lead to destabilising arms races and more conflict around the world.</p>
<p>However, the open letter not only overlooks a number of other concerns raised about autonomous weapons, but also effectively argues that AI and robotics researchers ought to retreat from work on autonomous weapons entirely.</p>
<p>Rather, I think there are very good reasons why AI and robotics researchers ought to engage with autonomous weapons head-on, not least to help make them behave more ethically and to improve our understanding of moral cognition.</p>
<h2>In the letter</h2>
<p>The open letter argues that the development of autonomous weapons could lead to a serious proliferation problem:</p>
<blockquote>
<p>If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.</p>
</blockquote>
<p>It re-affirms the standard policy conclusion of the <a href="http://stopkillerrobots.org">Campaign to Stop Killer Robots</a>: autonomous weapons should be banned.</p>
<p>There is an interesting difference in the open letter’s position, though: the ban in the letter is only on <em>offensive</em> autonomous weapons. <em>Defensive</em> autonomous weapons are, apparently, acceptable. It seems the campaign has done the numbers and accepted that NATO, the Gulf Emirates, Israel, Saudi Arabia, Japan and South Korea have spent billions on defensive autonomous weapons like <a href="http://www.raytheon.com/capabilities/products/patriot/">Patriot</a>, <a href="http://www.raytheon.com/capabilities/products/phalanx/">Phalanx and C-RAM</a> and will not support these being banned.</p>
<p>The letter also “airbrushes” AI history to support a second policy conclusion: that AI researchers should not sully their hands with the blood and guts of military AI but keep their discipline pristine lest it be “tarnished” by war. </p>
<p>To be blunt, if military applications “tarnish” a field, then AI was born tarnished. It was sired by <a href="https://theconversation.com/au/topics/alan-turing">Alan Turing</a> at Bletchley Park to <a href="http://www.colossus-computer.com/contents.htm">serve signals intelligence</a>. Turing built his celebrated machine for military ends. </p>
<p>Perhaps <a href="http://www.pcmag.com/encyclopedia/term/52614/tcp-ip">TCP/IP</a> – which underpins the entire internet today – is equally “tarnished” because it was born to facilitate continuity of <a href="https://en.wikipedia.org/wiki/ARPANET">command and control of nuclear missiles</a>. </p>
<p>The “global AI arms race” started in the 1940s and has never stopped. And yet society reaps rich benefits from civilian applications of technologies originally designed for military purposes. </p>
<p>Martin Seligman’s work on positive psychology, <a href="http://books.simonandschuster.com/Flourish/Martin-E-P-Seligman/9781439190760">Flourish</a>, has been <a href="http://www.pbs.org/newshour/bb/health-july-dec11-ptsd_12-14/">funded by the US Army</a>, yet we don’t generally consider it to be “tarnished”. Likewise, Daniel Kahneman’s <a href="http://us.macmillan.com/thinkingfastandslow/danielkahneman">research</a> into “System 1” (fast) and “System 2” (slow) was partly funded and inspired by the <a href="http://www.businessinsider.com.au/israeli-army-inspired-illusion-of-validity-2012-12">Israeli Defence Force</a>, yet it has made a terrific contribution to helping us understand how we think.</p>
<h2>Not in the letter</h2>
<p>Yet there are also other reasons why we might be concerned about autonomous weapons which are not covered in the open letter:</p>
<ul>
<li><p>Robots cannot discriminate targets (i.e. distinguish between friend, foe and civilian)</p></li>
<li><p>Robots cannot calculate proportionality (e.g. figure out if killing a high value terrorist target is worth risking the deaths of innocent children)</p></li>
<li><p>Robots cannot be held responsible for their actions (because they are not genuine moral agents with free will that can choose their actions and be praised, blamed or punished for them) so there is an “accountability gap” or “responsibility gap”</p></li>
<li><p>Robots will lower the entry cost of war and make wars and conflicts more likely</p></li>
<li><p>Robots will exacerbate the decline of martial valour already started with the use of remotely piloted vehicles </p></li>
<li><p>Robots should not make the decision to kill people. </p></li>
</ul>
<p>These are serious issues. However, they are the kinds of issues that AI and robotics researchers are perfectly placed to tackle. They are precisely the ones with the expertise to develop solutions that might make autonomous weapons able to reduce human casualties on future battlefields by <a href="http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots">making them more ethical</a>.</p>
<p>It is for this reason that I did not sign the open letter.</p>
<h2>AI should pay more attention to the ADF manual</h2>
<p>Using Kahneman’s terminology, rather than succumb to the “fast” intuitive appeal of System 1 moral cognition, AI and robotics researchers should engage their “slow” System 2 and work on the deep problem of moral cognition in combat.</p>
<p>The <a href="http://www.defence.gov.au/adfwc/documents/doctrinelibrary/addp/addp06.4-lawofarmedconflict.pdf">Australian Defence Force manual</a> is a good place to start. It is better than the American manuals because Australia has actually signed the <a href="https://www.icrc.org/applic/ihl/ihl.nsf/INTRO/470">Additional Protocols to the Geneva Conventions</a> and the <a href="https://www.icrc.org/ihl/INTRO/500?OpenDocument">Convention on Certain Conventional Weapons</a>, and the various other major treaties of <a href="https://www.icrc.org/eng/assets/files/other/what_is_ihl.pdf">International Humanitarian Law</a>. The United States, alas, has not. </p>
<p>AI researchers should read this manual. There is far more to military life than the decision to shoot the enemy when necessary. There are obligations to mitigate and minimise the calamities of war, obligations to uphold human rights and to protect cultural property. Sometimes cognitive agents with weapons must decide to fire to defend these rights and obligations. </p>
<p>Having read the manual, AI (and related fields) should research what functions required of service personnel can be automated. They should research the risks of automation, and recognise the failures of “meaningful human control”. </p>
<p>We need them to research the ethics of risk transfer, risk imposition and risk elimination, and figure out how to represent and process such normative data in a machine. </p>
<p>Or, if they prove it cannot be done, then design a better machine that <em>can</em> do it. They should decide whether humans or robots are better options for the battlefield as technology advances, and contemplate the psychological damage done by combat even to “cubicle warriors”. </p>
<p>We can then make policy decisions on the basis of hard data derived from experiments.</p>
<p>AI should thus not withdraw, but engage with the challenge of ethical cognition on the military robotics front line.</p>
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>If military robots are inevitable, then AI and robotics researchers should work to make them ethical, not retreat by calling for an ineffectual ban.Sean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/453212015-07-30T04:17:07Z2015-07-30T04:17:07ZWhy we should welcome ‘killer robots’, not ban them<figure><img src="https://images.theconversation.com/files/90236/original/image-20150730-10358-e9x6jv.jpg?ixlib=rb-1.1.0&rect=513%2C114%2C3030%2C2180&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A ban on killer robots is useless if your enemy doesn't play by the rules.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/fordan/3806294926/">Flickr/Bob Snyder</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>The <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">open letter</a> signed by more than <a href="http://futureoflife.org/AI/open_letter_autonomous_weapons">12,000 prominent people</a> calling for a ban on artificially intelligent killer robots, connected to arguments for a UN ban on the same, is misguided and perhaps even reckless.</p>
<p>Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and <a href="http://www.ashgate.com/isbn/9781472426642">writing about military robots</a>, fuelling the very scare campaign that I now vehemently oppose.</p>
<p>I was even one of the <a href="http://icrac.net/network/">hundreds of people</a> who, in the early days of the debate, gave their support to the International Committee for Robot Arms Control (<a href="http://icrac.net/">ICRAC</a>) and the <a href="http://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>.</p>
<p>But I’ve changed my mind. </p>
<p>Why the radical change in opinion? In short, I came to realise the following.</p>
<h2>The human connection</h2>
<p>The signatories are just scaremongers who are trying to ban autonomous weapons that “select and engage targets without human intervention”, which they say will be coming to a battlefield near you within “years, not decades”.</p>
<p>But, when you think about it critically, no robot can really kill without human intervention. Yes, robots are probably already capable of killing people using sophisticated mechanisms that <em>resemble</em> those used by humans, meaning that humans don’t necessarily need to oversee a lethal system while it is in use. But that doesn’t mean that there is no human in the loop.</p>
<p>We can model the brain, human learning and decision making to the point that these systems seem capable of generating creative solutions to killing people, but humans are very much involved in this process.</p>
<p>Indeed, it would be preposterous to overlook the role of programmers, cognitive scientists, engineers and others involved in building these autonomous systems. And even if we did, what of the commander, military force and government that made the decision to use the system? Should we overlook them, too?</p>
<h2>We already have automatic killing machines</h2>
<p>We already have weapons of the kind for which a ban is sought.</p>
<p>The Australian Navy, for instance, has successfully deployed highly automated weapons in the form of close-in weapons systems (<a href="https://www.navy.gov.au/fleet/weapons/anti-missile-and-ciws">CIWS</a>) for many years. These systems are essentially guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control, and are designed to provide surface vessels with a last defence against anti-ship missiles.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/lrre7STy5Pw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Phalanx is just one of several close-in weapon systems used by the Australian Navy.</span></figcaption>
</figure>
<p>When engaged autonomously, CIWSs perform functions normally performed by other systems and people, including search, detection, threat assessment, acquisition, targeting and target destruction.</p>
<p>This system would fall under the definition provided in the open letter if we were to follow the signatories’ logic. But you don’t hear of anyone objecting to these systems. Why? Because they’re employed far out at sea and only in cases where an object is approaching in a hostile fashion, usually descending in the direction of the ship at rapid speed.</p>
<p>That is, they’re employed only in environments and contexts in which the risk of killing an innocent civilian is virtually nil, much less than in regular combat.</p>
<p>So why can’t we focus on existing laws, which stipulate that they be used in the most particular and narrow circumstances?</p>
<h2>The real fear is of non-existent thinking robots</h2>
<p>It seems that the <em>real</em> worry that has motivated many of the 12,000-plus individuals to sign the anti-killer-robot petition is not about machines that select and engage targets without human intervention, but rather the development of sentient robots.</p>
<p>Given the advances in technology over the past century, it is tempting to fear thinking robots. We did leap from the first powered flight to space flight in less than 70 years, so why can’t we create a truly intelligent robot (or just one that’s too autonomous to hold a human responsible but not autonomous enough to hold the robot itself responsible) if we have a bit more time?</p>
<p>There are a number of good reasons why this will never happen. One explanation might be that we have a soul that simply can’t be replicated by a machine. While this tends to be the favourite of spiritual types, there are other natural explanations. For instance, there is a logical argument to suggest that certain brain processes are not computational or algorithmic in nature and thus <a href="https://www.newscientist.com/article/dn25560-sentient-robots-not-possible-if-you-do-the-maths/">impossible to truly replicate</a>.</p>
<p>Once people understand that any system we can conceive of today – whether or not it is capable of learning or highly complex operation – is the product of programming and artificial intelligence programs that trace back to its programmers and system designers, and that we’ll never have genuine thinking robots, it should become clear that the argument for a total ban on killer robots rests on shaky ground.</p>
<h2>Who plays by the rules?</h2>
<p>UN bans are also virtually useless. Just ask anyone who’s lost a leg to a recently laid <a href="http://www.un.org/disarmament/convarms/landmines/">anti-personnel mine</a>. The sad fact of the matter is that “bad guys” don’t play by the rules.</p>
<p>Now that you understand why I changed my mind, I invite the signatories to the killer robot petition to note these points, reconsider their position and join me on the “dark side” in arguing for more effective and practical regulation of what are really just highly automated systems.</p>
<p class="fine-print"><em><span>Jai Galliott has received funding from the Department of Defence, but the views expressed here are his alone and do not represent those of the Department or the Commonwealth.</span></em></p>The thousands of people who signed an open letter calling for a ban on autonomous killer weapons and robots are misguided. We already have such killing machines and we should embrace them.Jai Galliott, Research Fellow in Indo-Pacific Defence, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/445772015-07-27T20:08:37Z2015-07-27T20:08:37ZOpen letter: we must stop killer robots before they are built<figure><img src="https://images.theconversation.com/files/89761/original/image-20150727-1364-kj74te.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Science fiction abounds with warnings concerning offensive autonomous weapons, or 'killer robots'.</span> <span class="attribution"><span class="source">superde1uxe/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>More than 1,000 of the leading researchers in artificial intelligence (AI) and robotics have today signed and published an <a href="http://futureoflife.org/public/misc/open_letter_autonomous_weapons">open letter</a> calling for a ban on offensive autonomous weapons, also known colloquially as “killer robots”. </p>
<p>The letter has also been signed by many technologists and experts, including SpaceX and Tesla CEO Elon Musk, physicist Stephen Hawking, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and linguist and activist Noam Chomsky. </p>
<p><a href="http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat">Musk</a>, <a href="http://www.bbc.com/news/technology-30290540">Hawking</a> and <a href="http://www.computerworld.com/article/2901679/steve-wozniak-on-ai-will-we-be-pets-or-mere-ants-to-be-squashed-our-robot-overlords.html">Wozniak</a> have all recently warned about the dangers that AI poses to mankind. Though it has to be said, Wozniak thinks humans will be fine if robots take over the world; we’ll just become their pets.</p>
<p>The open letter urges the United Nations to support a ban on offensive autonomous weapons systems. This follows the April meeting of the <a href="http://goo.gl/6KECNS">Convention on Conventional Weapons</a>, held at the UN in Geneva, which <a href="https://theconversation.com/battle-lines-drawn-around-the-legality-of-killer-robots-40556">discussed such an idea</a>.</p>
<p>The letter argues that the deployment of such autonomous weapons is feasible within years and will play a dangerous role in driving the next revolution in warfare.</p>
<p>In the interest of full disclosure, I too have signed this letter. My view is that almost every technology can be used for good or bad. And AI is no different. We therefore need to make a choice as to which path to follow.</p>
<p>Artificial intelligence is a technology that can be used to help tackle many of the pressing problems facing society today: inequality and poverty; the rising cost of health care; the impact of global warming, and many others. But it can also be used to inflict unnecessary harm. And now is the right time to get in place a ban before this next arms race begins. </p>
<p>The open letter – reprinted below – gives a good summary of the arguments for a ban. In short, there is likely to be an arms race in such technology that will revolutionise warfare for the worse.</p>
<h2>What we can learn from history</h2>
<p>As always, we can learn a lot from history. A recent example is the <a href="http://disarmament.un.org/treaties/t/ccwc_p4/text">UN Protocol on Blinding Laser Weapons</a>, which came into force in 1998. The International Committee of the Red Cross argued that the ban was an <em>historic</em> step for humanity, stating that:</p>
<blockquote>
<p>It represents the first time since 1868, when the use of exploding bullets was banned, that a weapon of military interest has been banned before its use on the battlefield and before a stream of victims gave visible proof of its tragic effects. </p>
</blockquote>
<p>Of course, the technology for blinding lasers still exists; medical lasers that correct eyesight are an example of the very same technology. But because of this ban, no arms manufacturer sells blinding lasers. And we don’t have any victims of blinding lasers to care for.</p>
<p>Similarly, a ban on offensive autonomous weapons is not going to prevent the technology for such weapons being developed. After all, it would take only a few lines of code to turn an autonomous car into an offensive weapon. But a ban would ensure enough stigma and consequences if breached that we are unlikely to see conventional military forces using them. </p>
<p>This won’t stop terrorist and other smaller groups who care little for UN protocols, but they will be constrained on two levels. First, they’ll have to develop the technology themselves. They won’t be able to go out and buy any such weapons. And second, conventional military forces can still use any <em>defensive</em> technologies they like to protect themselves.</p>
<h2>Now is the time</h2>
<p>With this open letter, we hope to bring awareness to a dire subject which, without a doubt, will have a vicious impact on the whole of mankind. </p>
<p>We can get it right at this early stage, or we can stand idly by and witness the birth of a new era of warfare. Frankly, that’s not something many scientists in this field want to see.</p>
<p>Our call to action is simple: <em>ban offensive autonomous weapons and, in doing so, secure a safe future for us all.</em></p>
<p>A press conference releasing the open letter to the public will be held at the opening of the <a href="http://ijcai-15.org/">International Joint Conference on AI</a> at 9pm AEST, July 28, 2015. To watch the streaming of the press conference on Periscope (live or for the next 24 hours), follow <a href="https://twitter.com/tobywalsh">@TobyWalsh</a> on Twitter for notification of the stream. </p>
<hr>
<p><em>The following is the entire text of the open letter:</em></p>
<p>Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.</p>
<p>Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing etc. Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.</p>
<p>Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons – and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.</p>
<p>In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.</p>
<p class="fine-print"><em><span>Toby Walsh receives funding from the ARC, AOARD, and the Humboldt Foundation.</span></em></p>We need to ban offensive autonomous weapons - or ‘killer robots’ - before a new arms race to produce them begins.Toby Walsh, Professor, Research Group Leader, Optimisation Research Group , Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/405562015-04-24T04:21:41Z2015-04-24T04:21:41ZBattle lines drawn around the legality of ‘killer robots’<figure><img src="https://images.theconversation.com/files/79222/original/image-20150424-25578-1jf0v8q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's only a small step forward before drones like this one could operate entirely autonomously.</span> <span class="attribution"><span class="source">KAZ Vorpal/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>The future of lethal autonomous weapon systems (LAWS) – often referred to in the popular press as “<a href="https://theconversation.com/au/topics/lethal-autonomous-robots">killer robots</a>” – remains uncertain following a <a href="http://goo.gl/n9kq2e">week-long meeting in Geneva</a> to <a href="https://theconversation.com/machines-with-guns-debating-the-future-of-autonomous-weapons-systems-39795">discuss their legality</a>.</p>
<p>While the LAWS debate in Geneva was deeper and richer than previous discussions, key definitions – which are needed to word <a href="http://goo.gl/VjV8bE">a protocol</a> to restrict them – remain unclear and up for continued debate. </p>
<p>And with nations like the United Kingdom openly <a href="http://goo.gl/hq9V0j">opposed to a ban</a>, a protocol may end up being blocked entirely, much to <a href="http://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-developing-killer-robots">the chagrin of activists</a>.</p>
<p>The British say existing international humanitarian law (IHL) is sufficient to regulate LAWS. While there was universal agreement among delegations that key IHL principles such as <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter1_rule1">distinction</a>, <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter4_rule14">proportionality</a> and <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter5_rule15">precautions</a> in attack apply to LAWS, there were sharp differences of opinion as to whether machines can be programmed to observe such distinctions. </p>
<p>The UK has taken the view that programming might in future represent an acceptable form of meaningful human control, and that research into such possibilities should not be pre-emptively banned. Such systems might even reduce civilian casualties. The <a href="http://goo.gl/EcYT4h">Czechs</a> (a NATO ally) also expressed caution about a ban. </p>
<p>However, other nations repeated their calls for a ban, including <a href="http://goo.gl/kKSq3d">Cuba</a> and <a href="http://goo.gl/TBhlGa">Ecuador</a>.</p>
<h2>Down with the robots</h2>
<p>Still, for the <a href="http://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, British opposition is surely a major concern. The UK has a veto on the UN Security Council. British allies such as Australia and the US might decline to support a ban. Battle lines have been drawn. Definitions will be critical.</p>
<p>Clearly the British will defend their national interest in drone technology. BAE’s <a href="http://www.baesystems.com/enhancedarticle/BAES_157659/taranis?_afrLoop=40293971604000&_afrWindowMode=0&_afrWindowId=null#!%40%40%3F_afrWindowId%3Dnull%26_afrLoop%3D40293971604000%26_afrWindowMode%3D0%26_adf.ctrl-state%3Dbs81fulnj_4">Taranis</a> – the long range stealth drone under development by UK multinational defence contractor BAE Systems – is a likely candidate for some sort of “state of the art” lethal autonomy. </p>
<p>Interestingly, BAE Systems is also part of the consortium developing the <a href="http://www.baesystems.com/product/BAES_019772/f-35-lightning-ii?_afrLoop=3594690839489000&_afrWindowMode=0&_afrWindowId=1dv1ke5faz_1">F-35 Lightning II</a>, widely said to be the last manned fighter the US will develop. </p>
<p>Sooner or later there will be a trial dogfight between the F-35 and Taranis. It will be the Air Force equivalent of <a href="https://theconversation.com/how-computers-changed-chess-20772">Kasparov vs Deep Blue</a>. In the long run, most analysts think air war will go the way of chess and become “unsurvivable” for human pilots. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=403&fit=crop&dpr=1 600w, https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=403&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=403&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=506&fit=crop&dpr=1 754w, https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=506&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/79217/original/image-20150424-25558-3r1myl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=506&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The UK’s position to stifle a ban on LAWS was a setback for the Campaign to Stop Killer Robots.</span>
<span class="attribution"><span class="source">Global Panorama/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Definitional issues</h2>
<p>At the Geneva meeting, many nations and experts supported the idea of “meaningful human control” of LAWS, including <a href="http://goo.gl/D8q7S6">Denmark</a> and <a href="http://goo.gl/iNpr6E">Maya Brehm</a>, from the Geneva Academy of International Humanitarian Law and Human Rights. Others, such as <a href="http://goo.gl/ppyexG">France</a> and former British Air Commodore <a href="http://goo.gl/KBAast">W. H. Boothby</a>, thought it too vague. </p>
<p>The <a href="http://goo.gl/cE3nss">Israelis</a> noted that “even those who did choose to use the phrase ‘meaningful human control’, had different understandings of its meaning”. Some say this means “human control or oversight of each targeting action in real-time”. Others argue “the preset by a human of certain limitations on the way a lethal autonomous system would operate, may also amount to meaningful human control”.</p>
<p>It is perhaps a little disappointing that, after two meetings, basic definitions that would be needed to draft a Protocol VI of the Convention on Certain Conventional Weapons (<a href="http://goo.gl/3tBCHG">CCW</a>) to regulate or ban LAWS remain nebulous. </p>
<p>However, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, <a href="http://goo.gl/wGMl0b">Christof Heyns</a>, has been impressed by the speed and the “creativity and vigour” that various bodies have brought to the discussions. </p>
<p>Most nations accept that “fully autonomous weapons” that could operate without “meaningful human control” are undesirable, though there is no agreement on the meaning of “autonomous” either. </p>
<p>Some states, such as <a href="http://conf.unog.ch/digitalrecordings/player.html?guid=public/5704F69F-ED89-4CE3-883A-3E29F0651F20&position=6614">Palestine</a> and <a href="http://goo.gl/ak4EZs">Pakistan</a>, are happy to put drones in this category and move to ban their production, sale and use now. Others, such as <a href="http://goo.gl/25pl8k">Denmark</a> and the <a href="http://goo.gl/ORa8iy">Czech Republic</a>, maintain that no LAWS yet exist. </p>
<p>This is another definitional problem. <a href="http://goo.gl/SUiRRP">Paul Scharre’s presentation</a> was a good summary of how we might break up autonomy into definable elements.</p>
<h2>Future of war</h2>
<p>Aside from the definitional debates there were interesting updates from experts in the field of artificial intelligence (AI) and robotics. </p>
<p>Face and gait recognition by AI, according to <a href="http://goo.gl/qipbD6">Stuart Russell</a>, is now at “superhuman” levels. While he stressed this did not imply that robots could distinguish between combatant and civilian as yet, it is a step closer. Russell takes the view that “can robots comply with IHL?” is the wrong question. It is more relevant to ask what the consequence of a robotic arms race would be.</p>
<p><a href="http://goo.gl/I73v5x">Patrick Lin</a> made interesting observations on the ethical notion of human dignity in the context of LAWS. Even if LAWS could act in accordance with IHL, taking of human life by machines violates a right to dignity that may even be more fundamental to the right to life. </p>
<p><a href="http://goo.gl/XZKQsg">Jason Miller</a> spoke on moral psychology and interface design. Morally irrelevant situational factors can seriously compromise human moral performance and judgement.</p>
<p><a href="http://goo.gl/WHQ1DC">Michael Horowitz</a> presented polling data showing that people in India and the United States were not necessarily firmly opposed to LAWS. Horowitz’s key finding was that context matters. What the LAWS is doing when cast in the pollster’s story is significant. How you frame the question makes a significant difference to the approval numbers your poll generates. </p>
<p>Overall, the meeting was a step forward in the debate around the status and legality of lethal autonomous weapons, although that debate – and its implications for the future of warfare – is still far from settled.</p>
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The debate over whether lethal autonomous weapon systems (LAWS) – often called ‘killer robots’ – should be banned continues, although it’s far from settled.Sean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/397952015-04-12T20:31:45Z2015-04-12T20:31:45ZMachines with guns: debating the future of autonomous weapons systems<figure><img src="https://images.theconversation.com/files/77603/original/image-20150410-6857-yry6bd.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The future of warfare might involve autonomous weapon systems, such as the BAE Taranis, although some are unsettled by the idea of giving machines lethal capabilities.</span> <span class="attribution"><span class="source">Mike Young</span></span></figcaption></figure><p>The roles played by autonomous weapons will be discussed at a meeting in Geneva, Switzerland, this week which could have far reaching ramifications for the future of war.</p>
<p>The second <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/6CE049BE22EC75A2C1257C8D00513E26?OpenDocument">Expert Meeting on Lethal Autonomous Weapons Systems (LAWS)</a> will discuss issues surrounding what have been dubbed by some as “<a href="https://theconversation.com/au/topics/lethal-autonomous-robots">killer robots</a>”, and whether they ought to be permitted in some capacity or perhaps banned altogether.</p>
<p>The discussion falls under the purview of the Convention on Certain Conventional Weapons (<a href="https://www.icrc.org/ihl/INTRO/500?OpenDocument">CCW</a>), which has five protocols already covering non-detectable fragments, mines and booby traps, incendiary weapons, blinding lasers and the explosive remnants of war. </p>
<p>Australia and other parties to the CCW will consider policy questions about LAWS and whether there should be a sixth protocol added to the CCW that would regulate or ban LAWS.</p>
<p>There are generally two broad views on the matter:</p>
<ol>
<li><p>LAWS should be put in the same category as biological and chemical weapons and comprehensively and pre-emptively banned.</p></li>
<li><p>LAWS should be put in the same category as precision-guided weapons and regulated.</p></li>
</ol>
<p>The Campaign to Stop Killer Robots (<a href="http://www.stopkillerrobots.org/">CSKR</a>) argues for a ban on LAWS similar to the ban on blinding lasers in <a href="http://www.unog.ch/ccw">Protocol IV of the CCW</a> and the ban on anti-personnel landmines in the <a href="https://www.icrc.org/ihl/INTRO/580">Ottawa Treaty</a>. They argue that killer robots must be stopped before they proliferate and that tasking robots with human destruction is fundamentally immoral.</p>
<p>Others disagree, such as <a href="http://www.cc.gatech.edu/aimosaic/faculty/arkin/">Professor Ron Arkin</a> of Georgia Tech in the US, who argues that robots should be regarded more as the next generation of “smart” bombs. </p>
<p>They are potentially more accurate, more precise, completely focused on the strictures of International Humanitarian Law (IHL) and thus, in theory, preferable even to human war fighters who may panic, seek revenge or just plain stuff up. Malaysia Airlines flight MH17, after all, appears to have been shot down by “meaningful human control”. </p>
<p>Only five nations currently support a ban on LAWS: Cuba, Ecuador, Egypt, Pakistan and the Holy See. None are known for their cutting-edge robotics. Japan and South Korea, by contrast, have big robotics industries. South Korea has already <a href="http://www.globalsecurity.org/military/world/rok/sgr-a1.htm">fielded</a> the <a href="https://www.youtube.com/watch?v=pMkV8E2re9U">Samsung SGR-A1</a> “sentry robots” on its border with North Korea. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=403&fit=crop&dpr=1 600w, https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=403&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=403&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=506&fit=crop&dpr=1 754w, https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=506&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/77584/original/image-20150410-15250-r6xdzm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=506&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Not everyone is thrilled about the idea of allowing autonomous weapons systems loose on, or off, the battlefield.</span>
<span class="attribution"><span class="source">Global Panorama/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Definitions</h2>
<p>At the end of <a href="http://www.un.org/apps/news/story.asp?NewsID=47794#.VScWQfmUdXE">last year’s meeting</a>, most nations were non-committal. There were repeated calls for better definitions and more discussions, such as from Sweden, Germany, Russia and China. </p>
<p>Few nations have signed up to the CSKR’s view that “<a href="http://www.stopkillerrobots.org/the-problem/">the problem</a>” has to be solved quickly before it is too late. Most diplomats are asking what exactly they would like to ban, and why.</p>
<p>The UK government has suggested that existing international humanitarian law provides <a href="http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm130617/debtext/130617-0004.htm">sufficient regulation</a>. The British interest is that BAE Systems is working on a combat drone called <a href="http://www.baesystems.com/enhancedarticle/BAES_157659/taranis;baeSessionId=A3SR2wBt8CK1R5KeRW3lEDmDtC1yZiJf-6scL0VzucWei8QogrRK!220654963?_afrLoop=2231734104547000&_afrWindowMode=0&_afrWindowId=null#!%40%40%3F_afrWindowId%3Dnull%26_afrLoop%3D2231734104547000%26_afrWindowMode%3D0%26_adf.ctrl-state%3D1608dngiaf_4">Taranis</a>, which might be equipped with lethal autonomy and replace the <a href="http://www.baesystems.com/product/BAES_019765/tornado-gr4?_afrLoop=2482367002824000&_afrWindowMode=0&_afrWindowId=null#!%40%40%3F_afrWindowId%3Dnull%26_afrLoop%3D2482367002824000%26_afrWindowMode%3D0%26_adf.ctrl-state%3Dzu1ar5ksu_4">Tornado</a>. </p>
<p>LAWS are already regulated by existing International Humanitarian Law. According to the Red Cross, <a href="http://goo.gl/MmqWge">no expert disputes this</a>. LAWS that cannot comply with IHL principles, such as <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter1_rule1">distinction</a> and <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter4_rule14">proportionality</a>, are already illegal. LAWS are already required to go through <a href="https://www.icrc.org/applic/ihl/ihl.nsf/WebART/470-750045?OpenDocument">Article 36 review</a> before being fielded, just like any other new weapon. </p>
<p>As a result, the suggestion by the CSKR that swift action is required is not, as yet, gaining diplomatic traction. As their own <a href="http://www.stopkillerrobots.org/wp-content/uploads/2015/03/KRC_CCWexperts_Countries_25Mar2015.pdf">compilation report</a> shows, most nations have yet to grasp the issue, let alone commit to policy.</p>
<p>The real problem for the CSKR is that a LAWS is a combination of three hard-to-ban components: </p>
<ol>
<li><p>Sensors (such as radars) which have legitimate civilian uses</p></li>
<li><p>“Lethal” cognition (i.e. computer software that targets humans), which is not much different from “non-lethal” cognition (i.e. computer software that targets “virtual” humans in a video game)</p></li>
<li><p>“Lethal” actuators (i.e. weapons such as <a href="http://fas.org/man/dod-101/sys/missile/agm-114.htm">Hellfire missiles</a>), which can also be directly controlled by a human “finger on the button” and are not banned <em>per se</em>. </p></li>
</ol>
<p>Japan has already indicated it will oppose any ban on “<a href="http://goo.gl/BTcH8Z">dual-use</a>” components of a LAWS. The problem is that everything in a LAWS is dual-use – the “autonomy” can be civilian, the lethal weapons can be human operated, for example. What has to be regulated or banned is a combination of components, not any one core component.</p>
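<p>To make the dual-use point concrete, here is a minimal schematic sketch in Python. Everything in it is hypothetical – the class names and the <code>is_laws</code> test are illustrative assumptions, not any real system or legal definition – but it shows why regulation would have to attach to the combination of components rather than to any single one:</p>
<pre><code>from dataclasses import dataclass

# Hypothetical illustration only. Each component below is innocuous, or at
# least legal, on its own; it is the assembled platform that raises the
# regulatory question.

@dataclass
class Sensor:
    kind: str             # e.g. "radar", which has legitimate civilian uses

@dataclass
class Cognition:
    targets_humans: bool  # similar software targets "virtual" humans in games

@dataclass
class Actuator:
    lethal: bool          # e.g. a missile, legal under direct human control

@dataclass
class Platform:
    sensor: Sensor
    cognition: Cognition
    actuator: Actuator

def is_laws(platform):
    """Flag a platform only when autonomous targeting of humans is
    coupled to lethal actuation on the same system."""
    return platform.cognition.targets_humans and platform.actuator.lethal

# No per-component test flags these parts; only the combination is caught.
surveillance_drone = Platform(Sensor("radar"), Cognition(False), Actuator(False))
armed_autonomous_drone = Platform(Sensor("radar"), Cognition(True), Actuator(True))
assert not is_laws(surveillance_drone)
assert is_laws(armed_autonomous_drone)
</code></pre>
<p>On this way of carving things up, any ban text would have to name the coupling itself – which is precisely the drafting difficulty delegations face.</p>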
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=455&fit=crop&dpr=1 600w, https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=455&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=455&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=572&fit=crop&dpr=1 754w, https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=572&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/77583/original/image-20150410-15216-pjfvgp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=572&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Close In Weapon Systems already autonomously react to and shoot down incoming missiles without requiring a human to pull the trigger.</span>
<span class="attribution"><span class="source">Stephanie Smith/U.S. Navy</span></span>
</figcaption>
</figure>
<h2>Out of the loop?</h2>
<p>The phrase “meaningful human control” has been articulated by numerous diplomats as a desired goal of regulation. There is much talk of humans and “loops” in the LAWS debate:</p>
<ul>
<li><p>Human “in the loop”: the robot makes decisions according to human-programmed rules, a human hits a confirm button and the robot strikes. Examples are the <a href="http://www.raytheon.com/capabilities/products/patriot/">Patriot missile system</a> and Samsung’s SGR-A1 in “normal” mode.</p></li>
<li><p>Human “on the loop”: the robot decides according to human-programmed rules, a human has time to hit an abort button, and if the abort button is not hit, then robot strikes. Examples would be the <a href="http://www.raytheon.com/capabilities/products/phalanx/">Phalanx Close-In Weapon System</a> or the Samsung SGR-A1 in “invasion” mode, where the sentry gun can operate autonomously.</p></li>
<li><p>Human “off the loop”: the robot makes decisions according to human-programmed rules, the robot strikes, and a human reads a report a few seconds or minutes later. An example would be any “on the loop” LAWS with a broken or damaged network connection.</p></li>
</ul>
<p>It could be that a Protocol VI added to the CCW bans “off the loop” LAWS, for example. However, the most widely fielded extant LAWS are “off the loop” weapons, such as anti-tank and anti-ship mines, which have been legal for decades. </p>
<p>As such, diplomats might need a fourth category (all four are sketched in code after the list below):</p>
<ul>
<li>Robot “beyond the loop”: the robot decides according to rules it learns or creates itself, the robot strikes, and the robot may or may not bother to let humans know.</li>
</ul>
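<p>A schematic way to see how much turns on these definitions is to encode the four categories directly. The sketch below is hypothetical – the names and the bright line drawn by <code>permitted_under_hypothetical_protocol</code> are assumptions for illustration, not treaty language – but it shows how a Protocol VI aimed at “off the loop” systems would classify the weapons mentioned above:</p>
<pre><code>from enum import Enum, auto

class HumanRole(Enum):
    IN_THE_LOOP = auto()      # human must confirm before the robot strikes
    ON_THE_LOOP = auto()      # human can abort; if no abort, the robot strikes
    OFF_THE_LOOP = auto()     # human reads a report after the strike
    BEYOND_THE_LOOP = auto()  # robot follows rules it learned or created itself

def permitted_under_hypothetical_protocol(role):
    """One possible bright line: ban anything past 'on the loop'.
    Note that this would also catch mines, which have long been legal."""
    return role in (HumanRole.IN_THE_LOOP, HumanRole.ON_THE_LOOP)

# Classifications follow the examples discussed in the article.
examples = {
    "Patriot system (normal mode)": HumanRole.IN_THE_LOOP,
    "Phalanx CIWS": HumanRole.ON_THE_LOOP,
    "Anti-ship mine": HumanRole.OFF_THE_LOOP,
    "Self-taught targeting system": HumanRole.BEYOND_THE_LOOP,
}
for name, role in examples.items():
    verdict = "permitted" if permitted_under_hypothetical_protocol(role) else "banned"
    print(name, "-", verdict)
</code></pre>
<p>The legal outcome flips entirely on where that line is drawn, which is why the definitional work in Geneva matters so much.</p>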
<p>The meeting taking place this week will likely wrestle with these definitions, and it will be interesting to see if any resolution or consensus emerges, and what implications that might have for the future of war.</p>
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Should future wars be fought by autonomous systems? Or do they pose such a threat that they should be banned? These issues are being debated this week by diplomats from around the world.Sean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/231502014-02-13T14:46:18Z2014-02-13T14:46:18ZKiller robot drones are like drugs: regulate, but resist the urge to ban them<figure><img src="https://images.theconversation.com/files/41384/original/v4phdrhd-1392216988.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Taranis in flight.</span> <span class="attribution"><span class="source">BAE Systems</span></span></figcaption></figure><p>BAE Systems has revealed that it has <a href="http://www.baesystems.com/enhancedarticle/BAES_157659/taranis-unmanned?_afrLoop=49001152028000&_afrWindowMode=2&_afrWindowId=null">successfully test-flown</a> Taranis, its prototype Unmanned Aerial Vehicle. </p>
<p>The test has some people understandably hot under the collar. But while there is much to debate on the detail, the answer to the biggest question of all – whether or not we should ban drones – is unequivocal. We should not. As with effective but dangerous drugs, the answer is not to ban them; it’s to subject their development to rigorous testing and regulation.</p>
<p>BAE’s video footage shows a sleek boomerang-shaped blade cruising sedately over the Australian outback. Taranis is a stealth aircraft, designed to evade radar. It is pilotless, meaning it can manoeuvre in ways that would cause a human to black out if they were on board. And crucially, it’s a step on the way to drones that can make autonomous targeting decisions. More bluntly, it’s a step towards killer robots taking to the sky. </p>
<p>It’s not difficult to see why the idea of killer robots causes alarm. Some worry that these machines won’t be able to distinguish reliably between soldiers and civilians and will end up killing innocents. Others imagine Terminator-style wars between robots and people.</p>
<p><a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1468-5930.2007.00346.x/abstract">Philosophers</a> get in on the act too, arguing that enabling machines to decide who to kill is a fundamental breach of the conditions of just war. For it is unclear who should be held responsible when things go wrong and a drone kills the wrong targets. It can’t be the dumb robot. Nor can it be the soldier who sends it to battle, because he or she only decides whether to use it, not what it’s going to do. It can’t be the designers, because the whole point is that they have created a system able to make autonomous choices about what to target.</p>
<p>This is all smoke and mirrors. The <a href="https://theconversation.com/better-to-know-your-enemy-when-taking-on-a-killer-robot-20330">anti-killer-robot campaigners</a> are right when they say now is the time to debate whether this technology is forbidden fruit, better for all if left untouched. They are also right to worry whether killer robots will observe the laws of war. There is no question that killer robots should not be deployed unless they observe those laws with at least the same (sadly inconsistent) reliability as soldiers. But there is no mystery as to how we will achieve that reliability and with it resolve how to ascribe moral responsibility.</p>
<p>There is an analogy here with medicines. Their effects are generally predictable, but a risk of unpleasant side-effects remains. So we cautiously test new drugs during development and only then license them for prescription. When prescribed in accordance with the guidelines, we don’t hold doctors, drug companies, or the drugs to account for any bad side-effects that might occur. Rather, the body which approves the medicine is responsible for ensuring overall beneficial outcomes.</p>
<p>So too with killer robots. What we need is a thorough regulatory process. This will test their capabilities and allow them to be deployed only when they reliably observe the laws of war.</p>
<p class="fine-print"><em><span>Tom Simpson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>BAE Systems has revealed that it has successfully test-flown Taranis, its prototype Unmanned Aerial Vehicle. The test has some people understandably hot under the collar. But while there is much to debate…Tom Simpson, Associate Professor of Philosophy and Public Policy, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/203302013-11-15T06:27:41Z2013-11-15T06:27:41ZBetter to know your enemy when taking on a killer robot<figure><img src="https://images.theconversation.com/files/35316/original/zwxgvwhv-1384449785.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Would you trust this guy with a surface to air missile?</span> <span class="attribution"><span class="source">Stop Killer Robots</span></span></figcaption></figure><p><a href="http://www.stopkillerrobots.org/2013/11/seize-the-opportunity-for-action/">The Campaign to Stop Killer Robots</a>, a network of NGOs and academics, has done us all a valuable service by drawing attention to the development of unmanned systems that are able to kill without direct supervision by a human being.</p>
<p>The campaign wants the issue to be taken up by the UN Convention on Conventional Weapons and is calling on member states to discuss putting it on the agenda at a meeting in Geneva today.</p>
<p>However, the campaign has not been entirely clear about what Killer Robots are, despite implying that a ban on such robots would be its preference.</p>
<p>In order to understand the background to the campaign, it is necessary to engage with contemporary developments in military technology. Recent advances in sensor technology and satellite communications have made it possible to deploy remotely-piloted aerial vehicles (RPAVs), also colloquially known as drones. Contemporary RPAVs have sensors that enable them to record images and transmit them via a satellite link to an operator who might be thousands of miles away. Based on the information they receive from the system, operators can issue commands to the drone via the satellite link.</p>
<p>With a click on a joystick the operator can launch a deadly attack on a target. <a href="http://science.howstuffworks.com/predator.htm">The Predator drone</a>, probably the best known RPAV, is equipped with Hellfire missiles, which weigh around 100lbs and can take out armoured vehicles. RPAVs, it must be stressed, are not illegal under international law, but the fact that they essentially allow us to kill by remote control makes many feel uneasy.</p>
<p>This feeling of uneasiness is likely to grow stronger when one considers the future development of weapons where a human operator isn’t even needed to apply deadly force to a target.</p>
<p>Arguably, today’s RPAVs offer the blueprint for tomorrow’s autonomous weapons. For instance, the <a href="http://www.baesystems.com/product/BAES_020273/taranis?_afrLoop=208544484532000&_afrWindowMode=0&_afrWindowId=null&baeSessionId=s3tZSG7NdJ2Wrm7QNnSWP07521JQhsjJmCvf1226HYLqwCTCf9h5!-1741897545#%40%3F_afrWindowId%3Dnull%26baeSessionId%3Ds3tZSG7NdJ2Wrm7QNnSWP07521JQhsjJmCvf1226HYLqwCTCf9h5%2521-1741897545%26_afrLoop%3D208544484532000%26_afrWindowMode%3D0%26_adf.ctrl-state%3Dilip0gr67_4">Taranis aircraft</a>, currently being developed by BAE Systems, is a small stealth plane that can be programmed with a mission, fly into enemy territory and attack radar stations without assistance from a human operator.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/35318/original/2nww7w8v-1384450862.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The X-47B.</span>
<span class="attribution"><span class="source">U S Air Force Rob Densmore</span></span>
</figcaption>
</figure>
<p>Unlike Taranis, <a href="http://www.naval-technology.com/projects/x-47b-unmanned-combat-air-system-carrier-ucas/">Northrop Grumman’s X-47B</a>, another unmanned plane, has not yet been equipped with a payload. However, the X-47B can already take off and land on an aircraft carrier without being remote-controlled by an operator. Taking the long view, systems like the Taranis and X-47B are the tip of the iceberg when it comes to automated weapons that do not require direct human supervision. Who knows where we will be in fifty years.</p>
<p>This is precisely what troubles the Campaign to Stop Killer Robots. But one problem with the campaign’s strategy is that it targets systems which currently do not exist. Thus the notion of a killer robot remains obscure.</p>
<p>It has been established that killer robots can select and engage targets without an operator, but that’s not actually something new – there are already systems in place that can do this. Many missile defence systems installed on warships, for example, are automated. They are legal under current international law and it’s not clear why such systems should be banned. For instance, they considerably reduce the burden on human operators because they can make faster calculations about potential threats than humans can. In some cases, automated systems represent the next stage on from precision-guided weapons. And precision-guided weapons are surely preferable to the blunter tools of warfare. </p>
<p>The campaign probably has something different in mind. Maybe its call for a ban on autonomous weapons is a call for a ban on future weapons that could generate targeting decisions themselves. These would be weapons that could decide whether a particular object constitutes a legitimate military target in the first place. In this case, the machine itself would have to be capable of applying the legal criteria pertaining to targeting. And it is hard to see how a machine could do that.</p>
<p>Applying the law, especially insofar as the use of armed force is concerned, is not a matter of simple rule following that could be programmed into a machine. Rather, international law contains a number of grey areas, which require significant legal interpretation and judgement. What constitutes an enemy? What should you do if civilians are found to be in close range of your target? How do you balance risk against the goals of your mission? When would harm be excessive or disproportionate? It is inconceivable how a machine could make these judgements. It is already hard enough for human beings to do so.</p>
<p>For these reasons, the Campaign to Stop Killer Robots is right that a ban would be appropriate, should anyone be seriously interested in developing these weapons. Given contemporary developments, however, it is doubtful that anyone is. Governments and armies are interested in developing autonomous systems, such as the Taranis aircraft, that are capable of enforcing a targeting decision made by a human being without support from an operator.</p>
<p>There seems to be little enthusiasm in military and governmental circles for the development of weapons that can generate their own targeting decisions, even if this were technologically possible. Such systems would contravene a central principle of the armed forces: “command & control”. In any case, if it wants to make a real difference and get this issue on the international agenda, the Campaign to Stop Killer Robots needs to be clear about which systems it wants to see banned and why.</p>
<p class="fine-print"><em><span>Alex Leveringhaus works on a research project on emerging military technologies funded by the Netherlands Organisation for Scientific Research (NWO). The project is run in cooperation between the Oxford Institute for Ethics, Law and Armed Conflict at the University of Oxford and Delft University of Technology, NL. As part of the project, Alex has worked with members of government and the NGO sector, including the Dutch Foreign Ministry and various civil society groups. </span></em></p>The Campaign to Stop Killer Robots, a network of NGOs and academics, has done us all a valuable service by drawing attention to the development of unmanned systems that are able to kill without direct…Alexander Leveringhaus, James Martin Fellow, University of OxfordLicensed as Creative Commons – attribution, no derivatives.