<h1>Autonomous weapons – The Conversation</h1>
<hr>
<h1>How drone submarines are turning the seabed into a future battlefield</h1>
<figure><img src="https://images.theconversation.com/files/553149/original/file-20231011-21-odd5ct.jpg?ixlib=rb-1.1.0&rect=13%2C17%2C2982%2C1666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.darpa.mil/news-events/2019-11-13">DARPA</a></span></figcaption></figure><p><em>A 12-tonne fishing boat weighs anchor three kilometres off the port of Adelaide. A small crew huddles over a miniature submarine, activates the controls, primes the explosives, and releases it into the water. The underwater drone uses sensors and sonar to navigate towards its pre-programmed target: the single, narrow port channel responsible for the state’s core fuel supply …</em></p>
<p>You can guess the rest. A blockage, an accident, an explosion – any could be catastrophic for Australia, a country that conducts <a href="https://navalinstitute.com.au/wp-content/uploads/Protecting-Australian-Maritime-Trade-Report-March-2020.pdf">99% of its trade by sea</a> and imports more than 90% of its fuel.</p>
<p>As drone submarines or “uncrewed underwater vehicles” (UUVs) become cheaper, more common and more sophisticated, Australia’s 34,000km of coastline will face a significant future threat.</p>
<p>What can be done? Our <a href="https://www.rmit.edu.au/research/centres-collaborations/cyber-security-research-innovation/autonomous-uncrewed-underwater-vehicles">assessment</a> – validated through workshops with experts from across Australia – shows the same technologies can aid our maritime security, if we build them into our planning from now on. </p>
<h2>Seabed warfare</h2>
<p>Australia is not alone in its rising concern over undersea security. In 2022, France launched its <a href="https://www.archives.defense.gouv.fr/content/download/636001/10511909/file/20220214_FRENCH%20SEABED%20STRATEGY.pdf">Seabed Warfare Strategy</a> to address autonomous underwater maritime threats. In February 2023, NATO established an <a href="https://www.nato.int/cps/en/natohq/news_211919.htm">Undersea Infrastructure Coordination Cell</a> in response to the sabotage of the Nord Stream gas pipelines in September 2022.</p>
<p>The war in Ukraine has seen relatively small, cheap aerial drones play an outsized role. At a smaller scale, <a href="https://www.9news.com.au/world/ukrainian-sea-drone-attack-what-are-they-how-do-they-impact-war-explainer/7aa4d632-a8aa-4686-bc6b-a7aacfbe646c">underwater drones</a> have also enabled Ukraine to conduct asymmetric attacks on Russian forces.</p>
<p>Current drones can be used in intelligence, surveillance, reconnaissance, mine countermeasures, antisubmarine warfare, electronic warfare, underwater sensor grid development and special operations, among other things. </p>
<p>However, their capabilities are likely to expand. China’s Haidou-1 project dived to a <a href="https://www.scmp.com/news/china/science/article/3152076/chinas-haidou-1-reaches-new-depths-exploring-pacific-ocean-floor">record depth</a> of 10,908 metres. </p>
<p>A Chinese underwater glider, the Haiyan, holds the drone sub endurance record with a 3,600km voyage over 141 days across the South China Sea. Russia boasts of having a prototype nuclear-powered, nuclear-armed <a href="https://en.wikipedia.org/wiki/Status-6_Oceanic_Multipurpose_System">undersea drone</a>, although <a href="https://thebulletin.org/2023/06/one-nuclear-armed-poseidon-torpedo-could-decimate-a-coastal-city-russia-wants-30-of-them/">some analysts doubt</a> it really exists.</p>
<p>Nations are also developing broader programs to control underwater sea domains. </p>
<p>For instance, the United States’ proposed Advanced Undersea Warfare System envisions a network of fixed submarine stations able to deploy defensive and offensive drones. In the South China Sea, China is developing an “<a href="https://maritimeindia.org/chinas-undersea-great-wall-project-implications/">Underwater Great Wall</a>” of ships, bases and drones (both surface and underwater) to monitor the area and make it difficult for foreign navies to operate in international waters.</p>
<h2>A new age of war at sea?</h2>
<p>Some analysts argue these developments amount to the dawn of a “<a href="https://www.rand.org/blog/2022/11/the-age-of-uncrewed-surface-vessels.html">new age of naval warfare</a>”. Others suggest autonomous maritime systems, as they grow cheaper and more effective, may become preferred over crewed vehicles for national defence: by <a href="https://theconversation.com/ukraine-how-uncrewed-boats-are-changing-the-way-wars-are-fought-at-sea-201606">one estimate</a>, uncrewed vessels may make up more than half of the US naval fleet by 2052.</p>
<p>The advent of sea drones may also encourage the further growth of hybrid or “grey zone” approaches to conflict, which avoid outright warfare, keep casualties low, and can inflict heavy costs on enemies. In this context, uncrewed marine vessels may offer states a deniable way to carry out aggressive actions to advance their aims without crossing the threshold of war. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-how-uncrewed-boats-are-changing-the-way-wars-are-fought-at-sea-201606">Ukraine: how uncrewed boats are changing the way wars are fought at sea</a>
</strong>
</em>
</p>
<hr>
<p>Put differently, drone submarines may lend themselves to creating apparent accidents and other actions that can’t be pinned on their instigators. It is worth quoting the <a href="https://www.archives.defense.gouv.fr/content/download/636001/10511909/file/20220214_FRENCH%20SEABED%20STRATEGY.pdf">French Seabed Warfare Strategy</a> on this point:</p>
<blockquote>
<p>an attack on the underwater part of submarine cables is a potential cause of action, with possibilities ranging from a “convenient” accident in a coastal area, to deliberate military action. In this regard, the intrinsic features of the seabed make it the ideal theatre for non-attributable actions in “grey zones”.</p>
</blockquote>
<h2>The road ahead for Australia</h2>
<p>Our <a href="http://rmit.edu.au/research/centres-collaborations/cyber-security-research-innovation/autonomous-uncrewed-underwater-vehicles">new research</a> examined the threat to Australia’s trade posed by autonomous, uncrewed underwater vehicles. </p>
<p>With colleagues at the RMIT Centre for Cyber Security Research and Innovation, Charles Darwin University, and WiseLaw, we ran workshops with people from government, the Royal Australian Navy, Defence, industry and academia. We found a growing tension between efforts to protect ocean-borne trade and critical undersea infrastructure today, and more forward-looking strategies aimed at developing the next generation of maritime defence.</p>
<p>Under the AUKUS security pact, Australia has engaged the United Kingdom and the US to buy and build nuclear-powered submarines, and seeks to acquire and develop new systems “with additional undersea capabilities”. This is a good start, but the scale of the purchases has raised <a href="https://www.abc.net.au/news/2023-03-14/nuclear-submarine-aukus-how-cost-impact-military-capability/102089496">concerns</a> they will become all-consuming for Australia’s military.</p>
<p>Australia also engages in exercises such as <a href="https://www.australiandefence.com.au/defence/sea/navy-to-acquire-bluebottle-usvs">Autonomous Warrior</a> to test new and emerging systems in maritime defence. However, these exercises pay too little attention to the threats to maritime trade that underwater drones are likely to pose in the future.</p>
<p>One clear finding from our workshops is that mines are seen as an emerging challenge. Loitering drones with explosives – which could even be commercially available vessels carrying improvised explosives – could hold up commercial ports and traffic, bottle up naval assets, or disrupt maritime shipping routes. This would cause delays, loss of revenue, and increased insurance premiums.</p>
<p>As “set and forget” weapons, mines have an outsized impact: they can cause great damage at low cost, and they are difficult and expensive to find and neutralise.</p>
<p>For the time being, Australia is largely protected from the threat of underwater drones by distance. Current battery and communication technologies mean drones would need to be deployed from relatively nearby, and Australia’s maritime environments would make operation difficult.</p>
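<p>To get a feel for why distance still protects Australia, consider a rough, back-of-envelope calculation using the Haiyan endurance record cited above (3,600km in 141 days). The stand-off distance in the sketch below is a hypothetical illustration, not an operational figure.</p>
<pre><code># Back-of-envelope transit-time estimate for a long-endurance underwater
# glider, using the Haiyan record cited in this article:
# 3,600 km covered in 141 days.
RECORD_DISTANCE_KM = 3600
RECORD_DURATION_DAYS = 141

avg_speed_km_per_day = RECORD_DISTANCE_KM / RECORD_DURATION_DAYS  # ~25.5 km/day

# Hypothetical stand-off distance to an Australian port (illustrative only).
deployment_distance_km = 3000

transit_days = deployment_distance_km / avg_speed_km_per_day
print(f"Average speed: {avg_speed_km_per_day:.1f} km/day")                    # ~25.5
print(f"Transit over {deployment_distance_km} km: {transit_days:.0f} days")  # ~118
</code></pre>
<p>At a glider’s pace, a covert long-range approach takes months – which is why the opening scenario involves deployment from a nearby fishing boat. Faster propulsion shortens the transit dramatically, but drains batteries far sooner.</p>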
<p>However, the technology is advancing quickly. The time available for the Australian Department of Defence to address the threat of underwater uncrewed vehicles is shrinking. </p>
<hr>
<p><em>This article draws upon research funded under the Strategic Policy Grants Program run by the Department of Defence. The Strategic Policy Grants Program is an open and competitive mechanism for Defence to support independent research, events and activities. The views expressed herein are those of the authors and are not necessarily those of the Australian Government or Defence.</em></p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Aerial drones are already transforming warfare. Underwater drones are next.Adam Bartley, Postdoctoral Fellow, RMIT Centre for Cyber Security Research and Innovation, RMIT UniversityMatthew Warren, Director, RMIT University Centre for Cyber Security Research and Innovation, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2124442023-08-30T03:18:10Z2023-08-30T03:18:10ZUS military plans to unleash thousands of autonomous war robots over next two years<p>The United States military plans to start using thousands of autonomous weapons systems in the next two years in a bid to counter China’s growing power, US Deputy Secretary of Defense Kathleen Hicks <a href="https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/">announced</a> in a speech on Monday.</p>
<p>The so-called Replicator initiative aims to work with defence and other tech companies to produce <a href="https://www.defense.gov/News/News-Stories/Article/Article/3507514/hicks-underscores-us-innovation-in-unveiling-strategy-to-counter-chinas-militar/">high volumes of affordable systems</a> for all branches of the military.</p>
<p>Military systems capable of various degrees of independent operation have become increasingly common over the past decade or so. But the scale and scope of the US announcement make clear the future of conflict has changed: the age of warfighting robots is upon us. </p>
<h2>An idea whose time has come</h2>
<p>Over the past decade, there has been considerable development of advanced robotic systems for military purposes. Many of these have been based on modifying commercial technology, which itself has become more capable, cheaper and more widely available. </p>
<p>More recently, the focus has shifted onto experimenting with how to best use these in combat. Russia’s war in Ukraine has demonstrated that the technology is ready for real-world deployment. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-war-drones-are-changing-the-conflict-both-on-the-frontline-and-beyond-211460">Ukraine war: drones are changing the conflict – both on the frontline and beyond</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://warontherocks.com/2022/04/loitering-munitions-in-ukraine-and-beyond/">Loitering munitions</a>, a form of robot air vehicle, have been widely used to find and attack armoured vehicles and artillery. Ukrainian naval attack drones <a href="https://www.businessinsider.com/ukraine-sea-drones-paralyzed-russia-black-sea-fleet-spy-chief-2023-8">have paralysed</a> Russia’s Black Sea fleet, forcing their crewed warships to stay in port. </p>
<p>Military robots are an idea whose time has come.</p>
<h2>Robots everywhere</h2>
<p>In her speech, Hicks talked of a perceived urgent need to change how wars are fought. She <a href="https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/">declared</a>, in somewhat impenetrable Pentagon-speak, that the new Replicator program would </p>
<blockquote>
<p>field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months. </p>
</blockquote>
<p>Decoding this, “autonomous” means a robot that can carry out complex military missions without human intervention. </p>
<p>“Attritable” means the robot is cheap enough that it can be placed at risk and lost if the mission is of high priority. Such a robot is not quite designed to be disposable, but it would be reasonably affordable so many can be bought and combat losses replaced. </p>
<p>Finally, “multiple domains” means robots on land, at sea, in the air and in space. In short, robots everywhere for all kinds of tasks.</p>
<h2>The robot mission</h2>
<p>For <a href="https://www.cbsnews.com/news/pentagon-reviews-say-china-poses-greatest-security-challenge-to-u-s-while-russia-is-acute-threat/">the US military</a>, Russia is an “acute threat” but China is the “pacing challenge” against which to benchmark its military capabilities. </p>
<p>China’s People’s Liberation Army is seen as having a significant advantage in terms of “mass”: it has more people, more tanks, more ships, more missiles and so on. The US may have better-quality equipment, but China wins on quantity. </p>
<p>By quickly building thousands of “attritable autonomous systems”, the Replicator program will now give the US the numbers considered necessary to win future major wars. </p>
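<p>To see the “attritable” arithmetic in its simplest terms, the sketch below compares how many cheap systems versus exquisite platforms a fixed budget buys, and what the same mission losses cost in each case. All prices and loss rates are hypothetical illustrations; none come from the Pentagon or this article.</p>
<pre><code># Illustrative only: hypothetical unit costs and loss rates showing why
# "attritable" systems change the calculus of mass.
exquisite_platform_cost = 100_000_000  # one crewed, high-end platform ($, hypothetical)
attritable_drone_cost = 500_000        # one cheap autonomous system ($, hypothetical)

budget = 1_000_000_000
n_drones = budget // attritable_drone_cost       # 2,000 drones
n_platforms = budget // exquisite_platform_cost  # 10 platforms

loss_rate = 0.30  # suppose a hard mission loses 30% of whatever is sent
print(f"Drones lost:    {n_drones * loss_rate:,.0f} of {n_drones:,}")        # 600 of 2,000
print(f"Platforms lost: {n_platforms * loss_rate:,.0f} of {n_platforms:,}")  # 3 of 10

# Losing 600 cheap drones is replaceable from a production line;
# losing 3 major platforms (and their crews) is not.
</code></pre>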
<p>The imagined future war of most concern is a hypothetical battle for Taiwan, which <a href="https://thehill.com/policy/defense/3840337-generals-memo-spurs-debate-could-china-invade-taiwan-by-2025/">some postulate</a> could soon begin. Recent <a href="https://www.thedrive.com/the-war-zone/massive-drone-swarm-over-strait-decisive-in-taiwan-conflict-wargames">tabletop wargames</a> have suggested large swarms of robots could be the decisive element for the US in defeating any major Chinese invasion. </p>
<p>However, Replicator is also looking further ahead, and aims to institutionalise mass production of robots for the long term. Hicks argues: </p>
<blockquote>
<p>We must ensure [China’s] leadership wakes up every day, considers the risks of aggression, and concludes, “today is not the day” — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond.</p>
</blockquote>
<h2>A brave new world?</h2>
<p>One great concern about autonomous systems is whether their use can conform to the laws of armed conflict.</p>
<p>Optimists argue robots can be carefully programmed to follow rules, and in the heat and confusion of combat they may even follow those rules better than humans do. </p>
<p>Pessimists counter by noting not all situations can be foreseen, and robots may well misunderstand and attack when they should not. They have a point. </p>
<p>Among earlier autonomous military systems, the Phalanx close-in point defence gun and the Patriot surface-to-air missile have both made serious errors in action. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-researchers-should-not-retreat-from-battlefield-robots-they-should-engage-them-head-on-45367">AI researchers should not retreat from battlefield robots, they should engage them head-on</a>
</strong>
</em>
</p>
<hr>
<p>Used only once in combat, during the first Gulf War in 1991, the <a href="http://www.navweaps.com/index_tech/tech-103.php">Phalanx fired</a> at a chaff decoy cloud rather than countering the attacking anti-ship missile. The more modern Patriot has proven effective in shooting down attacking ballistic missiles, but also <a href="https://css.ethz.ch/en/services/digital-library/articles/article.html/976797da-7b8b-4e86-84f4-4052f394d2e1">twice shot down</a> friendly aircraft during the second Gulf War in 2003, killing their human crews.</p>
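<p>One way to understand how such errors arise: automated defences ultimately reduce sensor returns to a score compared against a threshold, and a threshold set low enough to stop a supersonic missile in the seconds available will sometimes fire at things it should not. The sketch below is a deliberately toy illustration of that trade-off – the names and scores are hypothetical, not a model of Phalanx, Patriot or any real system.</p>
<pre><code># Toy illustration of a confidence-threshold engagement decision.
# All contact names and scores are hypothetical.
contacts = [
    {"name": "anti-ship missile", "threat_score": 0.92},
    {"name": "chaff decoy cloud", "threat_score": 0.81},  # decoys can score high
    {"name": "friendly aircraft", "threat_score": 0.64},
]

# Set low enough to catch real missiles in the seconds available...
ENGAGE_THRESHOLD = 0.75

for contact in contacts:
    if contact["threat_score"] >= ENGAGE_THRESHOLD:
        print(f"ENGAGE: {contact['name']}")  # ...and the decoy gets engaged too
    else:
        print(f"hold:   {contact['name']}")
</code></pre>
<p>Raising the threshold spares the decoy but risks letting a real missile through; lowering it does the reverse. No setting removes the dilemma, which is why the question of human authorisation below matters.</p>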
<p>Clever design may overcome such problems in future autonomous systems. However, Hicks promised a “responsible and ethical approach to AI and autonomous systems” in her speech – which suggests any system able to kill targets will still need formal authorisation from a human to do so. </p>
<h2>A global change</h2>
<p>The US may be the first nation to field large numbers of autonomous systems, but other countries will be close behind. China is an obvious candidate, with great strength in both <a href="https://news.usni.org/2023/06/26/china-looking-to-become-artificial-intelligence-global-leader-report-says">artificial intelligence</a> and <a href="https://www.aljazeera.com/news/2023/1/24/how-china-became-the-worlds-leading-exporter-of-combat-drones">combat drone production</a>.</p>
<p>However, because much of the technology behind autonomous military drones has been developed for civilian purposes, it is widely available and relatively cheap. Autonomous military systems are not just for the great powers, but could also soon be fielded by many middle and smaller powers. </p>
<p><a href="https://thebulletin.org/2021/05/was-a-flying-killer-robot-used-in-libya-quite-possibly/">Libya</a> and <a href="https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/">Israel</a>, among others, have reportedly deployed autonomous weapons, and <a href="https://www.cnbc.com/2023/03/28/killer-drones-turkeys-growing-defense-industry-is-boosting-its-global-clout.html">Turkish-made drones</a> have proved important in the Ukraine war. </p>
<p>Australia is another country keenly interested in the possibilities of autonomous weapons. The Australian Defence Force is today building <a href="https://www.australiandefence.com.au/defence/unmanned/government-accelerates-ghost-bat-program">the MQ-28 Ghost Bat</a> autonomous fast jet air vehicle, robot <a href="https://www.aumanufacturing.com.au/bae-systems-turns-m113-personnel-carriers-autonomous">mechanised armoured vehicles</a>, robot <a href="https://www.aumanufacturing.com.au/australian-army-runs-autonomous-highway-truck-convoy">logistic trucks</a> and <a href="https://breakingdefense.com/2022/05/anduril-bets-it-can-build-3-large-autonomous-subs-for-aussies-in-3-years/">robot submarines</a>, and is already using the <a href="https://www.dailytelegraph.com.au/news/national/adf-to-use-sydney-engineering-firms-unmanned-solar-powered-boats-to-patrol-seas/news-story/ba7c6dd18c405c58e71e72da68699c39">Bluebottle robot sailboat</a> for maritime border surveillance in the Timor Sea. </p>
<p>And in a move that foreshadowed the Replicator initiative, the Australian government last month called for local companies to suggest how <a href="https://www.smh.com.au/politics/federal/we-re-trailing-the-world-push-for-aussie-made-defence-drones-20230820-p5dxxu.html">they might build</a> very large numbers of military aerial drones in-country in the next few years. </p>
<p>At least one Australian company, SYPAQ, is <a href="https://www.smh.com.au/politics/federal/aussie-cardboard-drones-used-in-attack-on-russian-airfield-20230829-p5e0bv.html">already on the move</a>, sending a number of its cheap, cardboard-bodied drones to bolster Ukraine’s defences. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1695876431614456186"}"></div></p><img src="https://counter.theconversation.com/content/212444/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Layton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The age of autonomous weapons is upon usPeter Layton, Visiting Fellow, Griffith Asia Institute, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2072012023-06-07T22:00:06Z2023-06-07T22:00:06ZAUKUS is already trialling autonomous weapons systems – where is NZ’s policy on next-generation warfare?<p>Defence Minister Andrew Little’s <a href="https://www.theguardian.com/world/2023/mar/28/new-zealand-may-join-aukus-pacts-non-nuclear-component">recent announcement</a> that New Zealand would be “willing to explore” participation in military technology sharing – or “pillar two” – under the AUKUS security arrangement has already divided opinion.</p>
<p>Proponents <a href="https://www.lowyinstitute.org/the-interpreter/aukus-nz-win-win">have argued</a> participation will enhance New Zealand’s security and help deter China in an increasingly contested geopolitical environment. Critics <a href="https://www.scoop.co.nz/stories/PO2304/S00106/aukus-and-peace.htm">have suggested</a> it would compromise New Zealand’s antinuclear commitment, undermine diplomacy and raise the prospect of a destabilising arms race in the Pacific region.</p>
<p>But missing from the debate so far is any clear analysis of how participation in pillar two of AUKUS might infringe on <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">New Zealand’s policy approach</a> to autonomous weapons systems (AWS).</p>
<p>That’s because of a lack of clarity about two things: what kinds of technology sharing and development would be included under pillar two, and just what New Zealand’s policy position on AWS currently is.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1664663513636143110"}"></div></p>
<h2>What do we know about pillar two?</h2>
<p>When AUKUS was announced, the promise to equip Australia with nuclear-powered submarines naturally dominated headlines. The other focus of the partnership, however, is cooperation on “<a href="https://pmtranscripts.pmc.gov.au/sites/default/files/AUKUS-factsheet.pdf">advanced capabilities</a>”.</p>
<p>While little detail has been released publicly, these capabilities include a range of high-tech applications: undersea robotics and autonomous systems, quantum technologies, AI and autonomy, advanced cyber technologies, hypersonic and counter-hypersonic capabilities, electronic warfare, defence innovation and information sharing.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/approach-with-caution-why-nz-should-be-wary-of-buying-into-the-aukus-security-pact-203915">Approach with caution: why NZ should be wary of buying into the AUKUS security pact</a>
</strong>
</em>
</p>
<hr>
<p>In some ways, pillar two of AUKUS is more significant than pillar one. It is certainly more imminent than the submarine delivery. It may also be “of greater long-term value and more strategically challenging”, according to analysis by the Australian Strategic Policy Institute.</p>
<p>There are a lot of uncertainties with emerging technologies, with no way to predict how they will develop or be adopted for military purposes. They also have more wide-reaching societal and economic implications, since much of the research and development capacity sits in civilian industries and universities.</p>
<h2>AUKUS and autonomous systems</h2>
<p>Ultimately, of course, AUKUS is about competing militarily with China. It’s the “most consequential strategic competitor” of the US and its allies and partners, <a href="https://www.defenceconnect.com.au/key-enablers/12040-us-to-focus-on-collaborative-defence-innovation-with-australia-uk">according to</a> US Assistant Secretary of Defense Mara Karlin.</p>
<p>Pillar two cooperation, Karlin argues, is necessary to accelerate military innovation, enhance interoperability and integrate the “defence industrial base” across partner countries in response to the threat posed by China.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-australia-signs-up-for-nuclear-subs-nz-faces-hard-decisions-over-the-aukus-alliance-201946">As Australia signs up for nuclear subs, NZ faces hard decisions over the AUKUS alliance</a>
</strong>
</em>
</p>
<hr>
<p>Last month, it was <a href="https://breakingdefense.com/2023/05/the-ai-side-of-aukus-uk-reveals-ground-breaking-allied-tech-demo/?_ga=2.240332611.1589671307.1685057650-572721413.1642293259">revealed</a> Australia, the US and the UK had held a trial of AUKUS advanced capabilities, focused on AI and autonomy. According to the UK Ministry of Defence, the event succeeded in achieving several “world firsts”, including AI-enabled assets from the three countries successfully operating as a “swarm”.</p>
<p>The systems were “testing target identification capabilities”, indicating the likely lethal applications of some pillar two technologies.</p>
<h2>Where does NZ stand now?</h2>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Former disarmament minister Phil Twyford.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<p>While some clarity is beginning to emerge on the technologies being explored under pillar two, New Zealand’s policy approach to these types of technologies has become increasingly murky.</p>
<p>Following advocacy by the former minister for disarmament and arms control, Phil Twyford, cabinet committed to supporting international regulations and bans on AWS in late 2021.</p>
<p>When Twyford <a href="https://www.beehive.govt.nz/release/government-commits-international-effort-ban-and-regulate-killer-robots">announced the policy</a>, he declared the emergence of lethal AWS would be “abhorrent and inconsistent with New Zealand’s interests and values”, and would have “significant implications for global peace and security”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/if-aukus-is-all-about-nuclear-submarines-how-can-it-comply-with-nuclear-non-proliferation-treaties-a-law-scholar-explains-201760">If AUKUS is all about nuclear submarines, how can it comply with nuclear non-proliferation treaties? A law scholar explains</a>
</strong>
</em>
</p>
<hr>
<p>Yet <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">the cabinet paper</a> itself contained significant caveats. These were aimed at allowing for maintenance of interoperability with key defence partners, and ensuring the New Zealand tech sector could continue to pursue “the responsible development and use of AI”.</p>
<p>Twyford’s advocacy was central to this policy position, which matters because he lost his ministerial role in Chris Hipkins’ first <a href="https://www.rnz.co.nz/news/political/483394/prime-minister-chris-hipkins-reveals-cabinet-reshuffle">cabinet reshuffle</a> as prime minister. Whether the approach outlined in the 2021 cabinet paper survives his demotion is not yet clear.</p>
<p>Thus far, his successor in the disarmament and arms control role, Nanaia Mahuta, has made no statements on AWS policy.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Defence Minister Andrew Little: interest in collaboration on cybersecurity, quantum computing and AI.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<h2>Interests and values</h2>
<p>Given these developments, Andrew Little’s openness to considering pillar two cooperation under AUKUS takes on an interesting complexion and raises numerous questions.</p>
<p>Some have <a href="https://www.newshub.co.nz/home/shows/2023/05/newshub-nation-political-panel-discuss-defence-force-funding-and-aukus.html">suggested</a> the defence minister has moderated his original comments on openness to pillar two, perhaps having faced some pushback from the prime minister and foreign minister. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/progress-in-detection-tech-could-render-submarines-useless-by-the-2050s-what-does-it-mean-for-the-aukus-pact-201187">Progress in detection tech could render submarines useless by the 2050s. What does it mean for the AUKUS pact?</a>
</strong>
</em>
</p>
<hr>
<p>Most recently, <a href="https://asia.nikkei.com/Politics/International-relations/Indo-Pacific/New-Zealand-interested-in-AUKUS-cooperation-in-non-nuclear-tech">Little has emphasised</a> the uncertainty around what New Zealand could offer under pillar two. But he has maintained there was an interest in collaboration on cybersecurity, quantum computing and artificial intelligence.</p>
<p>The recent tests of military AI technologies by the AUKUS partners, and the associated comments on their likely military purposes, point to the likelihood of various combinations of lethal and autonomous capabilities emerging from pillar two cooperation.</p>
<p>Before making any commitment to engaging in this part of the AUKUS arrangement, New Zealand’s political leaders need to carefully consider if these technologies are in keeping with the “interests and values” behind Phil Twyford’s initial push toward banning or regulating AWS.</p>
<p class="fine-print"><em><span>Jeremy Moses receives funding from Royal Society of New Zealand Marsden Fund. </span></em></p><p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>While the technologies being explored under ‘pillar two’ of the AUKUS security pact are becoming clearer, New Zealand’s policy on autonomous weapons and military AI has become increasingly murky.Jeremy Moses, Associate Professor in International Relations, University of CanterburySian Troath, Postdoctoral fellow, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2046192023-04-28T04:33:57Z2023-04-28T04:33:57ZThe defence review fails to address the third revolution in warfare: artificial intelligence<figure><img src="https://images.theconversation.com/files/523372/original/file-20230428-15-gy0qd6.jpeg?ixlib=rb-1.1.0&rect=102%2C56%2C7498%2C4102&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Throughout history, war has been irrevocably changed by the advent of new technologies. Historians of war have identified several technological revolutions.</p>
<p>The first was the <a href="https://www.brown.edu/Departments/Joukowsky_Institute/courses/13things/7687.html">invention of gunpowder</a> by people in ancient China. It gave us muskets, rifles, machine guns and, eventually, all manner of explosive ordnance. It’s uncontroversial to claim gunpowder completely transformed how we fought war. </p>
<p>Then came the invention of the nuclear bomb, raising the stakes higher than ever. Wars could be ended with just a single weapon, and life as we know it could be ended by a single nuclear stockpile.</p>
<p>And now, war has – like so many other aspects of life – entered the age of automation. AI will cut through the “fog of war”, transforming where and how we fight. Small, cheap and increasingly capable uncrewed systems will replace large, expensive, crewed weapon platforms.</p>
<p>We’ve seen the beginnings of this in Ukraine, where sophisticated armed home-made drones <a href="https://www.bbc.com/news/technology-65389215">are being developed</a>, where Russia is <a href="https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines">using AI “smart” mines</a> that explode when they detect footsteps nearby, and where Ukraine successfully used autonomous “drone” boats in a major attack on the <a href="https://theconversation.com/ukraine-how-uncrewed-boats-are-changing-the-way-wars-are-fought-at-sea-201606">Russian navy at Sevastopol</a>.</p>
<p>We also see this revolution occurring in our own forces in Australia. And all of this raises the question: why has the government’s recent defence strategic review failed to seriously consider the implications of AI-enabled warfare?</p>
<h2>AI has crept into Australia’s military</h2>
<p>Australia already has a range of autonomous weapons and vessels that can be deployed in conflict. </p>
<p>Our air force expects to acquire a number of 12 metre-long uncrewed <a href="https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat">Ghost Bat</a> aircraft to ensure our very expensive F-35 <a href="https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat">fighter jets</a> aren’t made sitting ducks by advancing technologies. </p>
<p>On the sea, the defence force has been testing a new type of uncrewed surveillance vessel called <a href="https://www.minister.defence.gov.au/media-releases/2023-03-06/first-ocius-bluebottle-uncrewed-surface-vessels-adf">the Bluebottle</a>, developed by local company Ocius. And under the sea, Australia is building a prototype six metre-long Ghost Shark <a href="https://www.defence.gov.au/news-events/news/2022-12-14/ghost-shark-stealthy-game-changer">uncrewed submarine</a>. </p>
<p>It also looks set to be developing many more technologies like this in the future. The government’s <a href="https://www.theaustralian.com.au/nation/defence/3bn-accelerator-puts-war-hitech-on-fast-track/news-story/4b4cabf8e40b37ef687d30ce3ea121d0">just announced A$3.4 billion defence innovation “accelerator”</a> will aim to get cutting-edge military technologies, including hypersonic missiles, directed energy weapons and autonomous vehicles, into service sooner.</p>
<p>How then do AI and autonomy fit into our larger strategic picture?</p>
<p>The recent defence strategic review is the latest analysis of whether Australia has the necessary defence capability, posture and preparedness to defend its interests through the next decade and beyond. You’d expect AI and autonomy would be a significant concern – especially since the review recommends <a href="https://www.afr.com/politics/federal/defence-rejig-costs-budget-19b-and-rising-20230424-p5d2qw">spending a not insignificant A$19 billion</a> over the next four years. </p>
<p>Yet the review mentions autonomy only twice (both times in the context of existing weapons systems) and AI once (as one of the four pillars of the AUKUS submarine program). </p>
<h2>Countries are preparing for the third revolution</h2>
<p>Around the world, major powers have made it clear they consider AI a central component of the planet’s military future. </p>
<p>The House of Lords in the United Kingdom is holding a <a href="https://committees.parliament.uk/committee/646/ai-in-weapon-systems-committee/">public inquiry</a> into the use of AI in weapons systems. In Luxembourg, the government just hosted an <a href="https://www.laws-conference.lu/">important conference</a> on autonomous weapons. And China has announced its intention to become the world leader in AI by 2030. Its New Generation AI Development Plan <a href="https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/">proclaims</a> “AI is a strategic technology that will lead the future”, both in a military and economic sense.</p>
<p>Similarly, Russian President Vladimir Putin has <a href="https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html">declared that</a> “whoever becomes the leader in this sphere will become ruler of the world” – while the United States has <a href="https://usacac.army.mil/sites/default/files/publications/17855.pdf">adopted a</a> “third offset strategy” that will invest heavily in AI, autonomy and robotics. </p>
<p>Unless we give more focus to AI in our military strategy, we risk being left fighting wars with outdated technologies. Russia saw the painful consequences of this last year, when its missile cruiser Moskva, the flagship of the Black Sea fleet, <a href="https://www.bbc.com/news/world-europe-61103927">was sunk</a> after being distracted by a drone. </p>
<h2>Future regulation</h2>
<p>Many people (including myself) hope autonomous weapons will soon be regulated. I was invited as an expert witness to an intergovernmental <a href="https://www.amnesty.org/en/latest/news/2023/02/more-than-30-countries-call-for-international-legal-controls-on-killer-robots/">meeting in Costa Rica</a> earlier this year, where 30 Latin and Central American nations called for regulation – many for the first time. </p>
<p>Regulation will hopefully ensure meaningful human control is maintained over autonomous weapon systems (although we’re yet to agree on what “meaningful control” will look like).</p>
<p>But regulation won’t make AI go away. We can still expect to see AI, and some levels of autonomy, as vital components in our defence in the near future.</p>
<p>There are instances, such as in minefield clearing, where autonomy is highly desirable. Indeed, AI will be very useful in managing the information space and in military logistics (where its use won’t be subject to the ethical challenges posed in other settings, such as when using lethal autonomous weapons).</p>
<p>At the same time, autonomy will create strategic challenges. For instance, by lowering costs and letting forces scale rapidly, it will shift the geopolitical order. Turkey is, for example, becoming a <a href="https://www.aspistrategist.org.au/has-turkey-become-an-armed-drone-superpower/">major drone superpower</a>. </p>
<h2>We need to prepare</h2>
<p>Australia needs to consider how it might defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us – and where it might be impossible to determine the attacker. A review that ignores all of this leaves us woefully unprepared for the future. </p>
<p>We also need to engage more constructively in ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defence is to be found in the political arena, and not the military one.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bet-youre-on-the-list-how-criticising-smart-weapons-got-me-banned-from-russia-185399">'Bet you're on the list': how criticising 'smart weapons' got me banned from Russia</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/204619/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council as an ARC Laureate Fellow. He has been banned indefinitely from Russia for his outspoken criticism of Russia's use of AI weapons in Ukraine. </span></em></p>AI is going to fundamentally transform how nations wage far. By failing to address it, the defence review leaves Australia unprepared for the future of war.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1987252023-02-21T13:24:17Z2023-02-21T13:24:17ZWar in Ukraine accelerates global drive toward killer robots<figure><img src="https://images.theconversation.com/files/510915/original/file-20230217-593-z3je8t.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4021%2C2924&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It wouldn't take much to turn this remotely operated mobile machine gun into an autonomous killer robot.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span></figcaption></figure><p>The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a <a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">Department of Defense directive</a>. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence autonomous weapons. It follows a related <a href="https://www.nato.int/cps/en/natohq/official_texts_208376.htm">implementation plan</a> released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.” </p>
<p>Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in <a href="https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines#:%7E:text=Utah%2Dbased%20Fortem%20Technologies%20has,them%20%E2%80%94%20all%20without%20human%20assistance.">Ukraine</a> and <a href="https://foreignpolicy.com/2021/03/30/army-pentagon-nagorno-karabakh-drones/">Nagorno-Karabakh</a>: Weaponized artificial intelligence is the future of warfare.</p>
<p>“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of <a href="https://article36.org/">Article 36</a>, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said. </p>
<h2>Pressure of war</h2>
<p>But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.</p>
<p>This month, a key Russian manufacturer <a href="https://www.defenseone.com/technology/2023/01/russian-robot-maker-working-bot-target-abrams-leopard-tanks/382288/">announced plans</a> to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to <a href="https://www.forbes.com/sites/katyasoldak/2023/01/27/friday-january-27-russias-war-on-ukraine-daily-news-and-information-from-ukraine/">defend Ukrainian energy facilities</a> from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous <a href="https://www.avinc.com/tms/switchblade">Switchblade drone</a>, said the technology is <a href="https://apnews.com/article/russia-ukraine-war-drone-advances-6591dc69a4bf2081dcdd265e1c986203">already within reach</a> to convert these weapons to become fully autonomous. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1446461845070549008"}"></div></p>
<p>Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “<a href="https://abcnews.go.com/Technology/wireStory/drone-advances-ukraine-bring-dawn-killer-robots-96112651">logical and inevitable next step</a>” and recently said that soldiers might see them on the battlefield in the next six months. </p>
<p>Proponents of fully autonomous weapons systems <a href="https://news.northeastern.edu/2019/11/15/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem/#_ga=2.7414138.976428111.1676666580-169995920.1676666580">argue that the technology will keep soldiers out of harm’s way</a> by keeping them off the battlefield. Such weapons would also allow military decisions to be made at superhuman speed, radically improving defensive capabilities. </p>
<p>Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them. </p>
<p>By contrast, fully autonomous drones, like the so-called “<a href="https://fortemtech.com/products/dronehunter-f700/">drone hunters</a>” now <a href="https://u24.gov.ua/news/shahed_hunters_defenders">deployed in Ukraine</a>, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems. </p>
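<p>The control-flow difference between these two modes is small in code but enormous in consequence. The sketch below is a deliberately simplified illustration of where the “human in the loop” sits; every function in it is a hypothetical placeholder, not any real weapon interface.</p>
<pre><code># Simplified illustration of "human in the loop" versus full autonomy.
# All functions here are hypothetical placeholders.

def detect_targets():
    """Stand-in for a sensor and target-recognition pipeline."""
    return ["candidate-1", "candidate-2"]

def operator_confirms(target):
    """Stand-in for the human decision that defines 'in the loop'."""
    answer = input(f"Engage {target}? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target):
    print(f"engaging {target}")

def semi_autonomous_cycle():
    # The system recommends; a human must initiate each engagement.
    for target in detect_targets():
        if operator_confirms(target):
            engage(target)

def fully_autonomous_cycle():
    # No confirmation step: the algorithm selects and engages on its own.
    for target in detect_targets():
        engage(target)
</code></pre>
<p>Only a single conditional separates the two modes – which echoes Wahid Nawabi’s point, cited earlier, that converting today’s semi-autonomous weapons to full autonomy is already within technical reach.</p>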
<h2>Calling for a timeout</h2>
<p>Critics like <a href="https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/">The Campaign to Stop Killer Robots</a> have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of <a href="https://www.stopkillerrobots.org/stop-killer-robots/digital-dehumanisation/">digital dehumanization</a>.</p>
<p>Together with <a href="https://www.hrw.org/topic/arms/killer-robots">Human Rights Watch</a>, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a soldier crouches on the ground peering into a black box as to small projectiles with wings are launched from tubes on either side of him" src="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=662&fit=crop&dpr=1 600w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=662&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=662&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=831&fit=crop&dpr=1 754w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=831&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=831&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings.</span>
<span class="attribution"><a class="source" href="https://madsciblog.tradoc.army.mil/wp-content/uploads/2021/06/Switchblade.jpg">U.S. Army AMRDEC Public Affairs</a></span>
</figcaption>
</figure>
<p>The organizations argue that the militaries <a href="https://research.northeastern.edu/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem-2/#:%7E:text=They%20found%20that%20there%20are,dollars%20into%20this%20arms%20race.">investing most heavily</a> in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the <a href="https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf">hands of terrorists and others outside of government control</a>.</p>
<p>The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “<a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">appropriate levels of human judgment over the use of force</a>.” Human Rights Watch <a href="https://www.hrw.org/news/2023/02/14/review-2023-us-policy-autonomy-weapons-systems">issued a statement</a> saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.</p>
<p>But as Gregory Allen, an expert from the national defense and international relations think tank <a href="https://www.csis.org/">Center for Strategic and International Studies</a>, argues, this language <a href="https://www.forbes.com/sites/davidhambling/2023/01/31/what-is-the-pentagons-updated-policy-on-killer-robots/">establishes a lower threshold</a> than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.” </p>
<p>The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy. </p>
<p>The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.</p>
<h2>Impossible balance?</h2>
<p>The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen. </p>
<p>The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “<a href="https://www.icrc.org/en/document/reflections-70-years-geneva-conventions-and-challenges-ahead">cannot be transferred to a machine, algorithm or weapon system</a>.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.</p>
<p>If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.</p>
<p class="fine-print"><em><span>I am not connected to Article 36 in any capacity, nor have I received any funding from them. I did write a short opinion/policy piece on AWS that was posted on their website.</span></em></p>The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure.James Dawes, Professor of English, Macalester CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1921702022-10-16T19:02:23Z2022-10-16T19:02:23Z‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie<figure><img src="https://images.theconversation.com/files/489521/original/file-20221013-12-lm966h.jpg?ixlib=rb-1.1.0&rect=143%2C201%2C1386%2C862&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ghost Robotics Vision 60 Q-UGV.</span> <span class="attribution"><a class="source" href="https://www.dvidshub.net/image/7351259/ghost-robotics-vision-60-q-ugv-demo">US Space Force photo by Senior Airman Samuel Becker</a></span></figcaption></figure><p>You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies <a href="https://www.popularmechanics.com/military/a12043/4267549/">would watch the latest Bond movie</a> to see what technologies might be coming their way.</p>
<p>Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming <a href="https://www.thewrap.com/florence-pugh-dolly-movie-murderous-sex-robot-apple-tv-plus/">sex robot courtroom drama Dolly</a>.</p>
<p>I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a <a href="https://apex-magazine.com/short-fiction/dolly/">2011 short story</a> by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.</p>
<h2>The real killer robots</h2>
<p>Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic <a href="https://www.britannica.com/topic/Metropolis-film-1927">Metropolis</a>.</p>
<p>But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.</p>
<p>Indeed, contrary to recent fears, robots may never be sentient.</p>
<p>It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and <a href="https://www.militarystrategymagazine.com/article/drones-in-the-nagorno-karabakh-war-analyzing-the-data/">Nagorno-Karabakh</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/drones-over-ukraine-fears-of-russian-killer-robots-have-failed-to-materialise-180244">Drones over Ukraine: fears of Russian 'killer robots' have failed to materialise</a>
</strong>
</em>
</p>
<hr>
<h2>A war transformed</h2>
<p>Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of <a href="https://theconversation.com/eye-in-the-sky-movie-gives-a-real-insight-into-the-future-of-warfare-56684">the real future of killer robots</a>. </p>
<p>On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store. </p>
<p>And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms. </p>
<p>This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see <a href="https://www.forbes.com/sites/amirhusain/2022/06/30/turkey-builds-a-hyperwar-capable-military/?sh=1500c4b855e1">Turkey emerging as a major drone power</a>.</p>
<p>And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies. </p>
<p>Robot manufacturers are, however, starting to push back against this future.</p>
<h2>A pledge not to weaponise</h2>
<p>Last week, six leading robotics companies pledged they would <a href="https://www.theguardian.com/technology/2022/oct/07/killer-robots-companies-pledge-no-weapons">never weaponise their robot platforms</a>. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can <a href="https://youtu.be/knoOXBLFQ-s">perform an impressive backflip</a>, and the Spot robot dog, which looks like it’s <a href="https://youtu.be/wlkCQXHEgjA">straight out of the Black Mirror TV series</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1578400002056953858"}"></div></p>
<p>This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised <a href="https://newsroom.unsw.edu.au/news/science-tech/world%E2%80%99s-tech-leaders-urge-un-ban-killer-robots">an open letter</a> signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a <a href="https://newsroom.unsw.edu.au/news/science-tech/unsws-toby-walsh-voted-runner-global-award">global disarmament award</a>.</p>
<p>However, the pledge by leading robotics companies not to weaponise their robot platforms is more virtue signalling than anything else.</p>
<p>We have, for example, already seen <a href="https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back">third parties mount guns</a> on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was <a href="https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html">assassinated by Israeli agents</a> using a robot machine gun in 2020.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<h2>Collective action to safeguard our future</h2>
<p>The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.</p>
<p>Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation. </p>
<p>More important still than any pledge from robotics companies, then, is the fact that the UN Human Rights Council <a href="https://www.ohchr.org/en/news/2022/10/human-rights-council-adopts-six-resolutions-appoints-special-rapporteur-situation">has recently unanimously decided</a> to explore the human rights implications of new and emerging technologies like autonomous weapons. </p>
<p>Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation. </p>
<p>Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/192170/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1889832022-08-22T05:24:42Z2022-08-22T05:24:42ZVirtual reality, autonomous weapons and the future of war: military tech startup Anduril comes to Australia<figure><img src="https://images.theconversation.com/files/480252/original/file-20220822-65285-8lcycw.png?ixlib=rb-1.1.0&rect=17%2C6%2C1479%2C992&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Anduril</span></span></figcaption></figure><p>Earlier this month, posters started going up around Sydney advertising an event called “In the Ops Room, with Palmer Luckey”. Rather than an album launch or standup gig, this turned out to be a free talk given last week by the chief executive of a high-tech US defence company called Anduril.</p>
<p>The company has set up an Australian arm, and Luckey is in town to <a href="https://prwire.com.au/pr/104375/from-oculus-rift-to-military-tech-palmer-luckey-is-in-australia">entice</a> “brilliant technologists in military engineering” to sign on. </p>
<p>Anduril makes a software system called <a href="https://www.anduril.com/lattice/">Lattice</a>, an “autonomous sensemaking and command & control platform” with a strong surveillance focus, which is used on <a href="https://www.washingtonpost.com/national-security/2022/03/11/mexico-border-surveillance-towers/">the US–Mexico border</a>. The company also produces <a href="https://www.cnet.com/science/palmer-luckey-ghost-4-military-drones-can-swarm-into-an-ai-surveillance-system/">flying drones</a> and has a deal to produce <a href="https://www.theguardian.com/australia-news/2022/aug/17/robotic-submarines-fast-tracked-for-sydney-harbour-to-bridge-defence-capability-gap">three robotic submarines</a> for Australia, with <a href="https://www.anduril.com/hardware/dive-ld/">capabilities</a> for surveillance, reconnaissance, and warfare. </p>
<p>The PR splash is unusual from the normally secretive world of military technology. But Luckey’s talk opened a window onto the future as seen <a href="https://www.anduril.com/mission/">by a company</a> “transforming US & allied military capabilities with advanced technology”.</p>
<h2>From Oculus to Anduril</h2>
<figure class="align-right ">
<img alt="a poster advertising the Luckey talk, pasted to an electricity box on a street in inner Sydney" src="https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=731&fit=crop&dpr=1 600w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=731&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=731&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=919&fit=crop&dpr=1 754w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=919&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=919&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">One of the posters advertising the Anduril talk in Sydney.</span>
<span class="attribution"><span class="source">Photo by Julia Scott-Stevenson</span></span>
</figcaption>
</figure>
<p>Unlike most defence tech moguls, Luckey got his start in the world of immersive tech and gaming. </p>
<p>While at college, the Anduril founder had <a href="https://www.latimes.com/entertainment/herocomplex/la-et-hc-palmer-luckey-s-oculus-rift-could-be-a-virtual-reality-breakthrough-20160326-story.html">a brief stint</a> at a military-affiliated mixed reality research lab at the University of Southern California, then set up his own virtual reality headset company called Oculus VR. In 2014, at the age of 21, Luckey sold Oculus to Facebook for US$2 billion.</p>
<p>In 2017 Luckey was fired by Facebook for reasons that were never made public. According to <a href="https://www.cnet.com/tech/tech-industry/facebook-reportedly-fired-palmer-luckey-for-political-views/">some reports</a>, the issue was Luckey’s support for the presidential campaign of Donald Trump. </p>
<p>Luckey’s next move, with backing from right-wing venture capitalist Peter Thiel’s Founders Fund, was to <a href="https://www.forbes.com/sites/jeremybogaisky/2022/06/03/palmer-luckey-anduril/">set up Anduril</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1006019397280862210"}"></div></p>
<h2>Finding new markets</h2>
<p>Since Luckey’s departure, Facebook (now known as Meta) has broadened its efforts beyond the virtual and augmented reality market. A forthcoming <a href="https://www.techradar.com/news/project-cambria-is-metas-most-important-vr-headset-right-now">“mixed reality” headset</a> plays a key role in its plans for a metaverse being pitched to business and industry as well as consumers.</p>
<p>We can see similar pivots from consumers to enterprise across the immersive tech industry. Magic Leap, maker of a much-hyped mixed-reality headset, later imploded and re-emerged <a href="https://www.theverge.com/2020/6/16/21274638/magic-leap-app-store-partnerships-update">focusing on healthcare</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/potential-for-harm-microsoft-to-make-us-22-billion-worth-of-augmented-reality-headsets-for-us-army-158308">'Potential for harm': Microsoft to make US$22 billion worth of augmented reality headsets for US Army</a>
</strong>
</em>
</p>
<hr>
<p>Microsoft’s mixed-reality headset, the HoloLens, was initially seen at <a href="https://docubase.mit.edu/project/terminal-3/">international film festivals</a>. However, the HoloLens 2, released in 2019, was marketed solely to businesses. </p>
<p>Then, in 2021, Microsoft won a ten-year, US$22 billion contract to provide the US Army with <a href="https://theconversation.com/potential-for-harm-microsoft-to-make-us-22-billion-worth-of-augmented-reality-headsets-for-us-army-158308">120,000 head-mounted displays</a>. Known as “Integrated Visual Augmentation Systems”, these headsets include a range of technologies such as thermal sensors, a heads-up display and machine learning for training situations.</p>
<h2>Fulfilling work?</h2>
<p>Speaking to the Sydney audience on Thursday, Luckey framed his own shift to defence not as one of economic necessity, but of personal fulfilment. He described saying “your job is worthless” to new recruits in social media companies making games or augmented reality filters. </p>
<p>That kind of work is fun but ultimately meaningless, he says, whereas working for Anduril would be “professionally fulfilling, spiritually fulfilling, fiscally fulfilling”. </p>
<p>Not all technology workers would agree that defence contracts are spiritually fulfilling. In 2018, Google employees revolted <a href="https://www.fastcompany.com/40578996/the-threat-of-weaponized-a-i-is-tearing-google-apart">against Project Maven</a>, an AI effort for the Pentagon. Staff at <a href="https://www.theverge.com/2019/2/22/18236116/microsoft-HoloLens-army-contract-workers-letter">Microsoft</a> and <a href="https://www.vice.com/en/article/xgx3ww/unity-ceo-promises-employees-their-work-will-never-lead-to-loss-of-life">Unity</a> have also expressed consternation over military involvement. </p>
<h2>‘Billions of robots’</h2>
<p>The first audience question on Thursday asked Luckey about the risks of autonomous AI – weapons run by software that can make its own decisions. </p>
<p>Luckey said he was worried about the potential of autonomy to do “really spooky things”, but much more concerned about “very evil people using very basic AI”. He suggested there was no moral high ground in refusing to work on autonomous weapons, as the alternative was “less principled people” working on them. </p>
<p>Luckey did say Anduril will always have a “human in the loop”: “[The software] is not making any life or death decisions without a person who’s directly responsible for that happening.” </p>
<p>This may be current policy, but it seems at odds with Luckey’s vision of the future of war. Earlier in the evening, he painted a picture:</p>
<blockquote>
<p>You’re going to see much larger numbers of systems [in conflicts] … you can’t have, let’s say, billions of robots that are all acting together, if they all have to be individually piloted directly by a person, it’s just not going to work, so autonomy is going to be critical for that.</p>
</blockquote>
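<p>A back-of-the-envelope calculation shows the tension between those two positions. The sketch below uses entirely illustrative assumptions – the operator count, review time and request rate are invented for the example, not drawn from Anduril or any real programme – but the arithmetic holds for whatever values you substitute.</p>
<pre><code>
# Illustrative sketch only: all numbers are assumptions invented for
# this example, not figures from Anduril or any real programme.

OPERATORS = 1_000            # humans available to authorise engagements
SECONDS_PER_REVIEW = 10      # assumed time to assess one engagement request
ROBOTS = 1_000_000           # far short of "billions", but enough to show it
REQUESTS_PER_ROBOT_HOUR = 1  # assumed engagement requests per robot per hour

requests_per_hour = ROBOTS * REQUESTS_PER_ROBOT_HOUR        # 1,000,000
capacity_per_hour = OPERATORS * 3600 // SECONDS_PER_REVIEW  # 360,000

print(f"engagement requests per hour: {requests_per_hour:,}")
print(f"human review capacity:        {capacity_per_hour:,}")
# Once requests outrun review capacity, engagements must either queue
# (forfeiting the swarm's speed advantage) or the human check must be
# loosened -- which is precisely the slide towards full autonomy.
</code></pre>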
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>Not everyone is as sanguine about the autonomous weapons arms race as Luckey. Thousands of scientists have <a href="https://www.cnet.com/science/elon-musk-google-deepmind-pledge-no-deadly-ai-autonomous-weapons/">pledged</a> not to develop lethal autonomous weapons.</p>
<p>Australian AI expert Toby Walsh, among others, has <a href="https://www.nytimes.com/2019/07/30/science/autonomous-weapons-artificial-intelligence.html">made the case</a> that “the best time to ban such weapons is before they’re available”.</p>
<h2>Choose your future</h2>
<p>My <a href="https://immerse.news/virtual-futures-a-manifesto-for-immersive-experiences-ffb9d3980f0f">own research</a> has explored the potential of immersive media technologies to help us imagine pathways to a future we want to live in. </p>
<p>Luckey seems to argue he wants the same: a use for these incredible technologies beyond augmented reality cat filters and “worthless” games. Unfortunately his vision of that future is in the zero-sum framing of an arms race, with surveillance and AI weapons at the core (and perhaps even “billions of robots acting together”). </p>
<p>During Luckey’s talk, he mentioned that Anduril Australia is working on other projects beyond the robotic subs, but he couldn’t share what these were. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australias-pursuit-of-killer-robots-could-put-the-trans-tasman-alliance-with-new-zealand-on-shaky-ground-188520">Australia's pursuit of 'killer robots' could put the trans-Tasman alliance with New Zealand on shaky ground</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/188983/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Julia Scott-Stevenson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Anduril says it is “transforming US & allied military capabilities with advanced technology” – and it’s setting up shop in Australia.Julia Scott-Stevenson, Chancellor's Postdoctoral Research Fellow, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1883162022-08-05T16:44:40Z2022-08-05T16:44:40ZBladed ‘Ninja’ missile used to kill al-Qaida leader is part of a scary new generation of unregulated weapons<p>The recent killing of al-Qaida leader <a href="https://theconversation.com/afghanistan-assassination-of-al-qaida-chief-reveals-tensions-at-the-top-of-the-taliban-188133">Ayman al-Zawahiri </a> by CIA drone strike was the latest <a href="https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/08/01/remarks-by-president-biden-on-a-successful-counterterrorism-operation-in-afghanistan/">US response to 9/11</a>. Politically, it amplified existing distrust between US leaders and the Taliban government in Afghanistan. The killing also exposed compromises in the <a href="https://www.bbc.co.uk/news/world-asia-51689443">2020 Doha peace agreement</a> between the US and the Taliban.</p>
<p>But another story is emerging with wider implications: the speed and nature of international weapons development. Take the weapon reportedly used to kill al-Zawahiri: the <a href="https://www.lemonde.fr/en/international/article/2022/08/03/ayman-al-zawahiri-s-death-what-is-the-hellfire-r9x-missile-that-the-americans-purportedly-used_5992310_4.html">Hellfire R9X “Ninja” missile</a>.</p>
<p>The Hellfire missile was originally conceived in the 1970s and 80s to destroy Soviet tanks. Rapid improvements from the 1990s onwards have <a href="https://www.thedefensepost.com/2021/03/22/agm-114-hellfire-missile/">resulted in multiple variations</a> with different capabilities. They can be launched from helicopters or Reaper drones. Their <a href="https://asc.army.mil/web/portfolio-item/hellfire-family-of-missiles/">various explosive payloads</a> can be detonated in different ways: on impact or just before impact.</p>
<p>Then there is the Hellfire R9X “Ninja”. It is not new, though it has remained largely in the shadows for five years. It was reportedly used in 2017 in Syria to <a href="https://www.wsj.com/articles/secret-u-s-missile-aims-to-kill-only-terrorists-not-nearby-civilians-11557403411">kill the deputy al-Qaida leader</a>, Abu Khayr al-Masri.</p>
<p>The Ninja missile does not rely on an explosive warhead to destroy or kill its target. It uses the speed, accuracy and kinetic energy of a 100-pound missile fired from up to 20,000 feet, armed with <a href="https://www.thedefensepost.com/2021/03/22/agm-114-hellfire-missile/">six blades</a> which deploy in the last moments before impact.</p>
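<p>A rough calculation conveys the energies involved. The 45kg mass below is simply the article’s “100-pound” figure converted; the impact speed of about 450 metres per second is an assumption (roughly Mach 1.3, a speed commonly cited for Hellfire variants), so the result should be read as an order-of-magnitude estimate only.</p>
<pre><code>
% Illustrative estimate; the ~450 m/s impact speed is an assumption.
E_k = \tfrac{1}{2}mv^2
    \approx \tfrac{1}{2} \times 45\,\text{kg} \times (450\,\text{m/s})^2
    \approx 4.6\,\text{MJ}
</code></pre>
<p>That is on the order of the energy released by a kilogram of TNT, delivered as a concentrated kinetic punch and six blades rather than as a blast wave.</p>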
<h2>‘Super weapons’</h2>
<p>The Ninja missile is the ultimate attempt – thus far – to accurately target and kill a single person. No explosion, no widespread destruction, and no deaths of bystanders. </p>
<p>But other weapon developments will also affect the way we live and how wars are fought or deterred. Russia has <a href="https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/03-putins-super-weapons">invested heavily</a> in these so-called super-weapons, building on older technologies. They aim to reduce or eliminate technological advantages enjoyed by the United States or Nato. </p>
<p>Russia’s hypersonic missile development aims are highly ambitious. The <a href="https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/03-putins-super-weapons">Avangard</a> missile, for example, won’t need to fly outside the earth’s atmosphere. It will remain within the upper atmosphere instead, giving it the ability to manoeuvre. </p>
<p>Such manoeuvrability will make it harder to detect or intercept. China’s <a href="https://eurasiantimes.com/china-flashes-rare-footage-of-hypersonic-missile-army-day/">DF-17 hypersonic ballistic missile</a> is similarly intended to evade US missile defences.</p>
<h2>The autonomous era</h2>
<p>At a smaller scale, <a href="https://metro.co.uk/2021/10/14/lethal-robot-dogs-now-have-assault-rifles-attached-to-their-backs-15420004/">robot dogs with mounted machine guns</a> are emerging on the weapons market. The weapon development company <a href="https://sword-int.com/the-sword-story/">Sword International</a> took a Ghost Robotics quadrupedal unmanned ground vehicle – or dog robot – and mounted an assault rifle on it. It was one of three robot dogs on <a href="https://www.independent.co.uk/tv/editors-picks/robot-dog-rifle-black-mirror-vf8b11fde">display at a US army trade show</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1449342876408713220"}"></div></p>
<p>Turkey, meanwhile, is claiming it has developed <a href="https://www.trtworld.com/magazine/a-series-of-autonomous-drones-gives-turkey-a-military-edge-47201">four types of autonomous drones</a>, which can identify and kill people, all without input from a human operator, or GPS guidance. According to a <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement">UN report</a> from March 2021, such an autonomous weapon system has been used already in Libya against a logistics convoy affiliated with the Khalifa Haftar armed group.</p>
<p>Autonomous weapons that don’t need GPS guidance are particularly significant. In a future war between major powers, the satellites that provide GPS navigation can be expected to be shot down. So any military system or aircraft that relies on GPS signals for navigation or targeting would be rendered ineffective. </p>
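<p>How might a weapon navigate with no GPS at all? One long-standing GPS-free technique – described here as a generic illustration, not as how any particular system works – is inertial dead reckoning: the vehicle integrates its own heading and speed over time to estimate its position, accepting an error that drifts upwards the longer it travels.</p>
<pre><code>
import math

# Minimal dead-reckoning sketch (generic illustration, not any real
# system): estimate position by integrating heading and speed over
# time. Sensor noise makes the estimate drift, which is why real
# systems typically fuse this with other cues such as terrain or vision.

def dead_reckon(start, steps):
    """steps: list of (heading_deg, speed_m_per_s, duration_s)."""
    x, y = start
    for heading_deg, speed, dt in steps:
        theta = math.radians(heading_deg)
        x += speed * dt * math.cos(theta)
        y += speed * dt * math.sin(theta)
    return x, y

# e.g. fly due east for 10 s, then due north for 5 s, at 20 m/s
print(dead_reckon((0.0, 0.0), [(0, 20, 10), (90, 20, 5)]))
# -> roughly (200.0, 100.0) metres from the start point
</code></pre>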
<p><a href="https://spacenews.com/pentagon-report-china-amassing-arsenal-of-anti-satellite-weapons/">China</a>, <a href="https://www.bbc.co.uk/news/science-environment-59299101">Russia</a>, India and the <a href="https://www.bbc.co.uk/news/science-environment-59299101">USA</a> have developed weapons to destroy satellites which provide global positioning for car sat-nav systems and civilian aircraft guidance. </p>
<p>The real nightmare scenario is combining these, and many more, weapon systems with artificial intelligence. </p>
<h2>New rules of war</h2>
<p>Are new laws or treaties needed to limit these futuristic weapons? In short, yes – but they don’t look likely. The US has <a href="https://www.cnbc.com/2022/04/18/us-to-end-anti-satellite-asat-testing-calls-for-global-agreement.html">called for</a> a global agreement to stop anti-satellite missile testing – but there has been no uptake. </p>
<p>The closest thing to an agreement is the signing of <a href="https://www.nasa.gov/specials/artemis-accords/index.html">NASA’s Artemis Accords</a>. These are principles to promote the peaceful exploration and use of space. But they only apply to “<a href="https://www.nasa.gov/specials/artemis-accords/img/Artemis-Accords-signed-13Oct2020.pdf">civil space activities conducted by the civil space agencies</a>” of the signatory countries. In other words, the agreement does not extend to military space activities or terrestrial battlefields. </p>
<p>In contrast, the US <a href="https://www.defense.gov/News/News-Stories/Article/Article/1924779/us-withdraws-from-intermediate-range-nuclear-forces-treaty/#:%7E:text=The%20United%20States%20has%20officially,the%20nations%20involved%20could%20pursue.">has withdrawn</a> from the Intermediate-Range Nuclear Forces Treaty. This is part of a long-term <a href="https://edition.cnn.com/2019/02/01/politics/nuclear-treaty-trump/index.html">pattern of withdrawal from global agreements</a> by US administrations. </p>
<p>Lethal autonomous weapon systems are a special class of emerging weapon system. They incorporate machine learning and other types of AI so that they can make their own decisions and act without direct human input. In 2014 the International Committee of the Red Cross (ICRC) <a href="https://www.icrc.org/en/document/report-icrc-meeting-autonomous-weapon-systems-26-28-march-2014">brought experts together</a> to identify issues raised by autonomous weapon systems. </p>
<p>In 2020 the ICRC and the Stockholm International Peace Research Institute went further, bringing together international experts to identify what <a href="https://www.sipri.org/media/press-release/2020/new-sipri-and-icrc-report-identifies-necessary-controls-autonomous-weapons">controls on autonomous weapon systems </a> would be needed.</p>
<p>In 2022, discussions are ongoing between countries <a href="https://meetings.unoda.org/meeting/ccw-gge-2019/">the UN first brought together</a> in 2017. This group of governmental experts continues to debate the development and use of lethal autonomous weapon systems. However, there has still been no international agreement on a new law or treaty to limit their use.</p>
<h2>New rules for autonomous weapon systems</h2>
<p>The campaign group Stop Killer Robots has called throughout this period for an <a href="https://www.stopkillerrobots.org/">international ban</a> on lethal autonomous weapon systems. Not only has that not happened, there is an undeclared <a href="https://www.e-ir.info/2020/04/15/introducing-guiding-principles-for-the-development-and-use-of-lethal-autonomous-weapon-systems/">stalemate in the UN’s discussions</a> on autonomous weapons in Geneva. </p>
<p>Australia, Israel, Russia, South Korea and the US have <a href="https://una.org.uk/news/minority-states-block-progress-regulating-killer-robots">opposed a new treaty</a> or political declaration. Opposing them at the same talks, 125 member states of the Non-Aligned Movement are calling for <a href="https://documents.unoda.org/wp-content/uploads/2021/06/NAM.pdf">legally binding restrictions</a> on lethal autonomous weapon systems. But with Russia, China, the US, the UK and France all holding a UN Security Council veto, those powers can block any such binding law on autonomous weapons.</p>
<p>Outside these international talks and campaigning organisations, independent experts are proposing alternatives. For example, in 2019 the ethicist Deane-Peter Baker brought together the Canberra Group of independent international experts. The group produced <a href="https://www.e-ir.info/2020/04/15/guiding-principles-for-the-development-and-use-of-laws-version-1-0/">a report</a>, Guiding Principles for the Development and Use of Lethal Autonomous Weapon Systems.</p>
<p>These principles don’t solve the political impasse between superpowers. But if autonomous weapons are here to stay, the report is an early attempt to understand what new rules will be needed.</p>
<p>When Pandora’s mythical box was opened, untold horrors were unleashed on the world. Emerging weapon systems are all too real. Like Pandora, all we are left with is hope.</p>
<p class="fine-print"><em><span>Peter Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New weapons will require new rules of war – but there is little appetite for regulation.Peter Lee, Professor of Applied Ethics and Director, Security and Risk Research, University of PortsmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1736162021-12-20T13:14:50Z2021-12-20T13:14:50ZUN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research<figure><img src="https://images.theconversation.com/files/436998/original/file-20211210-27-1o7cvsn.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5763%2C4225&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Humanitarian groups have been calling for a ban on autonomous weapons.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/march-2019-berlin-a-robot-stands-in-front-of-the-news-photo/1131801019">Wolfgang Kumm/picture alliance via Getty Images</a></span></figcaption></figure><p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>The United Nations <a href="https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/">Convention on Certain Conventional Weapons</a> debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but <a href="https://www.reuters.com/article/us-un-disarmament-idAFKBN2IW1UJ">didn’t reach consensus on a ban</a>. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could be <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified Black people as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
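<p>The “errs in bulk” point can be made concrete with a toy comparison; the numbers below are assumptions chosen purely for illustration. Ten thousand independent human judgements with a 1% error rate produce scattered, uncorrelated mistakes, while ten thousand systems sharing one flawed targeting model repeat the identical mistake everywhere at once.</p>
<pre><code>
import random

# Toy comparison with assumed numbers: independent human error versus
# one error baked into a model shared across an entire fleet.
random.seed(0)
N = 10_000                # decisions made / systems deployed
HUMAN_ERROR_RATE = 0.01   # assumed per-decision human error rate

# Humans err independently: mistakes are scattered and uncorrelated.
human_errors = sum(random.random() < HUMAN_ERROR_RATE for _ in range(N))

# A fleet runs a single targeting model. If its learned rule misreads
# one class of target, every unit inherits the same fault at once.
model_misreads_this_class = True
fleet_errors = N if model_misreads_this_class else 0

print(f"independent human errors: about {human_errors:,} of {N:,}")
print(f"shared-model errors:      {fleet_errors:,} of {N:,}")
</code></pre>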
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “<a href="https://news.un.org/en/story/2020/07/1068041">myth of a surgical strike</a>” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undergo while launching and maintaining wars. </p>
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback experienced around the world today</a>. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap</a>.</p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/an-autonomous-robot-may-have-already-killed-people-heres-how-the-weapons-could-be-more-destabilizing-than-nukes-168049">article</a> originally published on September 29, 2021.</em></p>
<p class="fine-print"><em><span>James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.James Dawes, Professor of English, Macalester CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1680492021-09-29T12:23:40Z2021-09-29T12:23:40ZAn autonomous robot may have already killed people – here’s how the weapons could be more destabilizing than nukes<figure><img src="https://images.theconversation.com/files/423433/original/file-20210927-21-wsi2zg.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4928%2C3280&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The term 'killer robot' often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary looking but no less lethal.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:W-MUTT_-_Ship-to-Shore_Maneuver_Exploration_and_Experimentation_2017_01.jpg">John F. Williams/U.S. Navy</a></span></figcaption></figure><p><em>An updated version of this article was published on Dec. 20, 2021. <a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">Read it here</a>.</em></p>
<p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could be <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified African Americans as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the <a href="https://news.un.org/en/story/2020/07/1068041">“myth of a surgical strike”</a> to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undergo while launching and maintaining wars. </p>
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback</a> experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap.</a> </p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<p class="fine-print"><em><span>James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.James Dawes, Professor of English, Macalester CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1264832019-12-04T13:27:09Z2019-12-04T13:27:09ZRobotics researchers have a duty to prevent autonomous weapons<figure><img src="https://images.theconversation.com/files/304560/original/file-20191201-156120-1g0lx17.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Both the hardware and software of commercial drones can be changed easily.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Drones-Regulations/a4f3ca06278c49f58504551dadaf0faf/6/0">AP Photo/Seth Wenig</a></span></figcaption></figure><p>Robotics is rapidly being transformed by advances in artificial intelligence. And the benefits are widespread: We are seeing safer vehicles with the <a href="https://www.subaru.com/engineering/eyesight.html">ability to automatically brake in an emergency</a>, robotic arms <a href="https://www.vox.com/2017/5/26/15656120/manufacturing-jobs-automation-ai-us-increase-robot-sales-reshoring-offshoring">transforming factory lines that were once offshored</a> and <a href="https://www.starship.xyz/">new robots</a> that can do everything from shop for groceries to <a href="https://www.wired.com/story/postmates-delivery-robot-serve/">deliver prescription drugs</a> to people who have trouble doing it themselves.</p>
<p>But our ever-growing appetite for intelligent, autonomous machines poses a host of ethical challenges.</p>
<h2>Rapid advances have led to ethical dilemmas</h2>
<p>These ideas and more were swirling as my colleagues and <a href="https://scholar.google.com/citations?user=-YOtPcIAAAAJ&hl=en">I</a> met in early November at one of the world’s largest autonomous robotics-focused research conferences – <a href="https://www.ieee-ras.org/about-ras/ras-calendar/event/1141-iros-2019-international-conference-on-intelligent-robots-and-systems">the IEEE International Conference on Intelligent Robots and Systems</a>. There, academics, corporate researchers, and government scientists presented developments in algorithms that allow robots to make their own decisions.</p>
<p>As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, a computer’s ability to identify objects in an image: in 2010, the state of the art succeeded <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf">only about half of the time</a>, and it was stuck there for years. Today, the best published algorithms <a href="https://paperswithcode.com/sota/image-classification-on-imagenet">reach 86% accuracy</a>. That advance alone allows autonomous robots to understand what they are seeing through their camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.</p>
<p>This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.</p>
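<p>To make the accessibility of this capability concrete, here is a minimal sketch – our illustration, not code from the conference – of how a pretrained image classifier can be run today with the open-source PyTorch/torchvision libraries. The model choice and the file name <code>photo.jpg</code> are assumptions for the example.</p>
<pre><code>
# A minimal sketch: label an arbitrary photo with an ImageNet-pretrained
# network. Assumes the open-source PyTorch and torchvision libraries.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
model = resnet50(weights=weights).eval()      # inference mode
preprocess = weights.transforms()             # the matching input pipeline

img = Image.open("photo.jpg")                 # hypothetical input image
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
</code></pre>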
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">San Francisco became the first U.S. city to ban the use of facial recognition technology by police and other city agencies. This same technology can be coupled with drones, which are becoming more autonomous.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Facial-Recognition-Backlash/5d8a15313554488986c7eb5c2401c9d7/9/0">AP Photo/Eric Risberg</a></span>
</figcaption>
</figure>
<p>But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions <a href="https://www.wilsoncenter.org/sites/default/files/ai_and_privacy.pdf">related to privacy and security have been fundamentally altered</a>. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.</p>
<h2>Easy to modify systems</h2>
<p>When developing machines that can make their own decisions – typically called autonomous systems – the ethical questions that arise are arguably more concerning than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities which were once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=395&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=395&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=395&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=496&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=496&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=496&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Commercial drones allow for many beneficial uses, such as delivering medicine or spraying for mosquitoes.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Zanzibar-Drones-Fight-Malaria/84a8645acf2a4b78bfd017e683e048a2/5/0">AP Photo/Haroub Hussein</a></span>
</figcaption>
</figure>
<p>People with no background in computer science can <a href="https://www.kaggle.com/learn/overview">learn some of the most state-of-the-art artificial intelligence tools</a>, and robots are more than willing to let you <a href="https://developer.dji.com/onboard-sdk/documentation/sample-doc/advanced-sensing-object-detection.html">run your newly acquired machine learning techniques</a> on them. There are online forums filled with people <a href="https://www.quora.com/What-is-the-best-way-to-understand-the-basics-of-robotics">eager to help anyone learn how to do this</a>.</p>
<p>With earlier tools, it was already easy enough to program your minimally modified drone <a href="https://www.instructables.com/id/Vision-Based-Object-Tracking-and-Following-Using-3/">to identify a red bag and follow it</a>. <a href="http://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html">More recent object detection technology</a> can detect and track more than 9,000 different object types. Combined with <a href="https://spectrum.ieee.org/automaton/robotics/drones/skydios-new-drone-is-smaller-even-smarter-and-almost-affordable">newer, more maneuverable drones</a>, it’s not hard to imagine how easily they could be equipped with weapons. What’s to stop someone from strapping an explosive or another weapon to a drone equipped with this technology? </p>
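<p>For a sense of how little code such tracking requires, here is a minimal sketch – ours, not the tutorial’s – of the “follow the red bag” idea using the open-source OpenCV library: threshold a colour in HSV space, find the largest blob, and compute the steering offset a follow-me controller would consume. The threshold values are illustrative assumptions.</p>
<pre><code>
# A minimal colour-tracking sketch. Assumes the open-source OpenCV
# library (cv2); the HSV thresholds below are illustrative guesses.
import cv2

cap = cv2.VideoCapture(0)                     # any camera or video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue bands into one mask
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        # Horizontal offset of the target from the frame centre: the
        # "steering" signal a follow-me controller would act on.
        dx = x + w / 2 - frame.shape[1] / 2
        print("steer", "right" if dx > 0 else "left", abs(dx))
</code></pre>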
<p>Using a variety of techniques, autonomous drones are already a threat. They have been caught <a href="https://www.washingtonpost.com/news/checkpoint/wp/2017/06/14/isis-drones-are-attacking-u-s-troops-and-disrupting-airstrikes-in-raqqa-officials-say/">dropping explosives on U.S. troops</a>, <a href="https://techcrunch.com/2019/08/29/climate-activists-plan-to-use-drones-to-shut-down-heathrow-airport-next-month/">shutting down airports</a> and <a href="https://www.nytimes.com/2018/08/10/world/americas/venezuela-video-analysis.html">being used in an assassination attempt on Venezuelan leader Nicolas Maduro</a>. The autonomous systems that are being developed right now could make staging such attacks easier and more devastating.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gEnG2tv5LJM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Reports indicate that the Islamic State is using off-the-shelf drones, some of which are being used for bombings.</span></figcaption>
</figure>
<h2>Regulation or review boards?</h2>
<p>About a year ago, a group of researchers in artificial intelligence and autonomous robotics <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">put forward a pledge</a> to refrain from developing lethal autonomous weapons. They defined lethal autonomous weapons as platforms that are capable of “selecting and engaging targets without human intervention.” As a robotics researcher who isn’t interested in developing autonomous targeting techniques, <a href="https://www.colorado.edu/irt/autonomous-systems/2018/08/08/cu-engineering-faculty-respond-lethal-autonomous-weapons-pledge">I felt that the pledge missed the crux of the danger</a>. It glossed over important ethical questions that need to be addressed, especially those at the broad intersection of drone applications that could be either benign or violent.</p>
<p>For one, the researchers, companies and developers who wrote the papers and built the software and devices generally aren’t doing it to create weapons. However, they might inadvertently enable others, with minimal expertise, to create such weapons. </p>
<p>What can we do to address this risk?</p>
<p>Regulation is one option, and one already in use: aerial drones are banned near airports and around national parks. Those rules are helpful, but they don’t prevent the creation of weaponized drones. Traditional weapons regulations are not a sufficient template, either. They generally tighten controls on the source material or the manufacturing process. That would be nearly impossible with autonomous systems, where the source materials are widely shared computer code and the manufacturing process can take place at home using off-the-shelf components. </p>
<p>Another option would be to follow in the footsteps of biologists. In 1975, they held <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC432675/">a conference on the potential hazards of recombinant DNA</a> at Asilomar in California. There, experts agreed to voluntary guidelines that would direct the course of future work. For autonomous systems, such an outcome seems unlikely at this point. Many research projects that could be used in the development of weapons also have peaceful and incredibly useful outcomes.</p>
<p>A third choice would be to establish self-governance bodies at the organization level, such as the <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/institutional-review-boards-frequently-asked-questions">institutional review boards</a> that currently oversee studies on human subjects at companies, universities and government labs. These boards consider the benefits to the populations involved in the research and craft ways to mitigate potential harms. But they can regulate only research done within their institutions, which limits their scope. </p>
<p>Still, a large number of researchers would fall under these boards’ purview – within the autonomous robotics research community, nearly every presenter at technical conferences is a member of an institution. Research review boards would be a first step toward self-regulation and could flag projects at risk of weaponization.</p>
<h2>Living with the peril and promise</h2>
<p>Many of my colleagues and I are excited to develop the next generation of autonomous systems. I feel that the potential for good is too promising to ignore. But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.</p>
<p class="fine-print"><em><span>Christoffer Heckman receives funding from the Defense Advanced Research Projects Agency and the National Science Foundation.</span></em></p>Modified commercial drones are getting more powerful and can easily be turned into weapons. A researcher argues for ways to prevent their development.Christoffer Heckman, Assistant Professor of Computer Science, University of Colorado BoulderLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1139412019-03-27T12:28:25Z2019-03-27T12:28:25ZKiller robots already exist, and they’ve been here a very long time<figure><img src="https://images.theconversation.com/files/265890/original/file-20190326-36270-hurjwk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5600%2C3150&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-render-very-detailed-robot-army-1197070471">Mykola Holyutyak/Shutterstock</a></span></figcaption></figure><p>Humans will always make the final decision on whether armed robots can shoot, <a href="https://www.defenseone.com/technology/2019/03/us-military-changing-killing-machine-robo-tank-program-after-controversy/155256/">according to a statement</a> by the US Department of Defense. Their clarification comes amid fears about a new advanced targeting system, known as <a href="https://www.fbo.gov/index.php?s=opportunity&mode=form&id=29a4aed941e7e87b7af89c46b165a091&tab=core&_cview=0">ATLAS</a>, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about <a href="https://www.bbc.co.uk/news/technology-47524768">so-called “killer robots”</a>, the concept is nothing new – <a href="https://www.wired.com/2007/08/httpwwwnational/">machine-gun wielding “SWORDS” robots</a> were deployed in Iraq as early as 2007.</p>
<p>Our relationship with military robots goes back even further than that. This is because when people say “robot”, they can mean any technology with some form of “autonomous” element that allows it to perform a task without the need for direct human intervention.</p>
<p>These technologies have existed for a very long time. During World War II, the <a href="https://en.wikipedia.org/wiki/Proximity_fuze">proximity fuse</a> was developed to explode artillery shells at a predetermined distance from their target. This made the shells far more effective than they would otherwise have been by augmenting human decision making and, in some cases, taking the human out of the loop completely.</p>
<p>So the question is not so much whether we should use autonomous weapon systems in battle – we already use them, and they take many forms. Rather, we should focus on how we use them, why we use them, and what form – if any – human intervention should take.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=371&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=371&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=371&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=466&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=466&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=466&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Autonomous targeting systems originated with innovations in anti-aircraft weaponry during World War II.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/antiaircraft-cannon-military-silhouettes-fighting-scene-1308991861">Zef Art/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The birth of cybernetics</h2>
<p>My research explores the philosophy of human-machine relations, with a particular focus on military ethics, and the way we distinguish between humans and machines. During World War II, mathematician Norbert Wiener laid the groundwork of <a href="https://www.pangaro.com/definition-cybernetics.html">cybernetics</a> – the study of the interface between humans, animals and machines – in his work on the control of anti-aircraft fire. By studying the deviations between an aircraft’s predicted motion, and its actual motion, Wiener and his colleague Julian Bigelow came up with the concept of the “feedback loop”, where deviations could be fed back into the system in order to correct further predictions.</p>
<p>Wiener’s theory therefore went far beyond mere augmentation, for cybernetic technology could be used to pre-empt human decisions – removing the fallible human from the loop, in order to make better, quicker decisions and make weapons systems more effective.</p>
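<p>The feedback loop itself is simple enough to capture in a few lines. The following toy sketch – our illustration, not Wiener’s actual mathematics – predicts a target’s next position, measures the deviation from what actually happened, and feeds a fraction of that error back to correct the next prediction. The gain value is an arbitrary assumption.</p>
<pre><code>
# A toy predict-and-correct feedback loop in the spirit of Wiener and
# Bigelow: deviations between predicted and actual motion are fed back
# into the model to improve the next prediction.
def track(observations, gain=0.5):
    pos, vel = observations[0], 0.0
    for measured in observations[1:]:
        predicted = pos + vel            # predict one step ahead
        error = measured - predicted     # deviation: actual minus predicted
        pos = predicted + gain * error   # feed the error back as a correction
        vel += gain * error              # correct the velocity estimate too
        yield predicted, measured

# An accelerating target: each correction pulls the predictions back
# toward the measurements instead of letting the error grow unchecked.
for predicted, measured in track([0, 1, 3, 6, 10, 15]):
    print(f"predicted {predicted:5.2f}   actual {measured}")
</code></pre>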
<p>In the years since World War II, the computer has emerged to sit alongside cybernetic theory to form a central pillar of military thinking, from the laser-guided “smart bombs” of the Vietnam era to cruise missiles and Reaper drones.</p>
<p>It’s no longer enough to merely augment the human warrior as it was in the early days. The next phase is to remove the human completely – “maximising” military outcomes while minimising the political cost associated with the loss of allied lives. This has led to the <a href="https://www.nytimes.com/roomfordebate/2016/01/12/reflecting-on-obamas-presidency/obamas-embrace-of-drone-strikes-will-be-a-lasting-legacy">widespread use of military drones</a> by the US and its allies. While these missions are highly controversial, in political terms they have proved far preferable to the public outcry caused by military deaths.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A modern military drone.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/combat-drone-fly-blue-sky-above-1302944902">Alex LMX/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The human machine</h2>
<p>One of the most contentious issues relating to drone warfare is the role of the drone pilot or “operator”. Like all personnel, these operators are bound by their employers to “do a good job”. However, the terms of success are far from clear. As philosopher and cultural critic Laurie Calhoun observes:</p>
<blockquote>
<p>The business of UCAV [drone] operators is to kill.</p>
</blockquote>
<p>In this way, their task is not so much to make a human decision, but rather to do the job that they are employed to do. If the computer tells them to kill, is there really any reason why they shouldn’t?</p>
<p>A similar argument can be made with respect to the modern soldier. From GPS navigation to video uplinks, soldiers carry numerous devices that tie them into a vast network that monitors and controls them at every turn.</p>
<p>This leads to an ethical conundrum. If the purpose of the soldier is to follow orders to the letter – with cameras used to ensure compliance – then why do we bother with human soldiers at all? After all, machines are far more efficient than human beings and don’t suffer from fatigue and stress in the same way as a human does. If soldiers are expected to behave in a programmatic, robotic fashion anyway, then what’s the point in shedding unnecessary allied blood?</p>
<p>The answer here is that the human serves as an alibi or form of “ethical cover” for what is, in reality, an almost wholly mechanical, robotic act. Just as the drone operator’s job is to oversee the computer-controlled drone, so the human’s role in the Department of Defense’s new ATLAS system is merely to act as ethical cover in case things go wrong.</p>
<p>While Predator and Reaper drones may stand at the forefront of the public imagination about military autonomy and “killer robots”, these innovations are in themselves nothing new. They are merely the latest in a long line of developments that go back many decades.</p>
<p>While it may comfort some readers to imagine that machine autonomy will always be subordinate to human decision making, this really does miss the point. Autonomous systems have long been embedded in the military and we should prepare ourselves for the consequences.</p>
<p class="fine-print"><em><span>Mike Ryder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Science fiction has made us vigilant of ‘killer robots’ in our midst, but they’re far closer than many of us realise.Mike Ryder, Associate Lecturer in Philosophy, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1026372018-09-12T02:51:16Z2018-09-12T02:51:16ZWhy it’s so hard to reach an international agreement on killer robots<figure><img src="https://images.theconversation.com/files/235713/original/file-20180911-123113-d66rcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The MK 15 Phalanx close-in weapons system, on the USS Reuben James guided-missile frigate, fires during an exercise. </span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/compacflt/7066133355/">Flickr/US Pacific Fleet</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>For several years, civil society groups have been <a href="https://www.stopkillerrobots.org/">calling for a ban</a> on what they call “killer robots”. Scores of technologists have <a href="https://futureoflife.org/open-letter-autonomous-weapons/">lent their voice</a> to the cause. Some two dozen governments now <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">support a ban</a> and several others would like to see some kind of international regulation. </p>
<p>Yet the latest talks on “lethal autonomous weapons systems” wrapped up last month with no agreement on a ban. The <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">Group of Governmental Experts</a> meeting, convened in Geneva under the auspices of the United Nations Convention on Certain Conventional Weapons, did not even clearly proceed towards one. The outcome was a decision to continue discussions next year. </p>
<p>Those supporting a ban are <a href="https://www.hrw.org/news/2018/09/05/support-grows-killer-robots-ban">not impressed</a>. But the reasons for the failure to reach agreement on the way forward are complex. </p>
<h2>What to ban?</h2>
<p>The immediate difficulty concerns articulating what technology is objectionable. The related, deeper question is about whether increased autonomy of weapons is always bad.</p>
<p>Many governments, including <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/2440CD1922B86091C12582720057898F/$file/2018_LAWS6a_Germany.pdf">Germany</a>, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/072ED40378F79CFBC125827200575723/$file/2018_LAWSGeneralExchange_Spain.pdf">Spain</a> and the <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/050CF806D90934F5C12582E5002EB800/$file/2018_GGE+LAWS_August_Working+Paper_UK.pdf">United Kingdom</a>, have said they do not have, and do not want, weapons wholly uncontrolled by humans. At the same time, militaries already own weapons that, to some degree, function without someone pulling the trigger.</p>
<p>Since the 1970s, navies have used so-called close-in weapon systems (CIWS). Once switched on, these weapons can automatically shoot down incoming rockets and missiles as the warship’s final line of defence. <a href="https://www.raytheon.com/capabilities/products/phalanx">Phalanx</a>, with its distinctively shaped radar dome, is probably the best-known weapon system of this kind.</p>
<p>Armies now deploy land-based variants of CIWS, generally known as C-RAM (short for counter-rocket, artillery and mortar), for the protection of military bases. </p>
<p>Other types of weapons also have autonomous functionality. For example, <a href="https://www.baesystems.com/en-us/product/155-bonus">sensor-fuzed weapons</a>, fired in the general direction of their targets, rely on sensors and preset targeting parameters to launch themselves at individual targets. </p>
<p>None of these weapons has stirred significant controversy. </p>
<h2>The acceptable vs the unacceptable</h2>
<p>What exactly is the dreaded “fully autonomous” weapon system that no-one has much appetite for? Attempts to answer this question over the past few years have not enjoyed success. </p>
<p>The supporters of a ban note – correctly – that the lack of a precise definition has not stopped arms control negotiations before. They point to the <a href="https://www.un.org/disarmament/ccm/">Convention on Cluster Munitions</a>, signed in 2008, as an example.</p>
<p>The notion of a cluster munition – a large bomb that disperses small unguided bomblets – was clear enough from the outset. Yet the precise properties of the banned munition were agreed upon later in the process. </p>
<p>Unfortunately, the comparison between cluster munitions and autonomous weapons does not quite work. Though cluster munitions were a loose category to start, it was clear they could be categorised by technical criteria. </p>
<p>In the end, the Convention on Cluster Munitions <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/E6D340011E720FC9C1257516005818B8/$file/Convention+on+Cluster+Munitions+E.pdf">draws a line</a> between permissible and prohibited munitions by reference to things such as the number, weight and self-destruction capability of submunitions. </p>
<p>With regard to any similar rules on autonomous weapon systems, it is not only unclear where the line should be drawn between what is and isn’t permissible, it is also unclear what criteria to use for drawing it.</p>
<h2>How much human control?</h2>
<p>One way out of this thicket of definitions is to shift the focus from the weapon itself to the way the human interacts with the weapon. Rather than debate what to ban, governments should agree on the necessary degree of control humans should exercise. <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/3BDD5F681113EECEC12582FE0038B22F/$file/2018_GGE+LAWS_August_Working+paper_Austria_Brazil_Chile.pdf">Austria, Brazil and Chile</a> have suggested starting treaty negotiations precisely along those lines.</p>
<p>This change of perspective may well prove to be helpful. But the key problem is thereby transformed rather than resolved. The question now becomes: what kind of human involvement is needed and when must it occur?</p>
<p>A strict idea of human control would entail a human making a conscious decision about each individual target in real time. This approach would cast a shadow on the existing weapon systems mentioned earlier. </p>
<p>A strict reading of human control might also require the operator to have the ability to abort a weapon until the moment it hits a target. This would raise questions about even the simplest of weapons – rocks, spears, bullets or gravity bombs – which leave human hands at some point. </p>
<p>An alternative understanding of human control would consider the weapon’s broader design, testing, acquisition and deployment processes. It would admit, for example, that a weapon preprogrammed by a human is in fact controlled by a human. But some would consider programming to be a poor and unpalatable substitute for a human acting at the critical time. </p>
<p>In short, the furious agreement about the need to maintain human involvement hides a deep disagreement about what that means. This is not a mere semantic dispute. It is an important and substantive disagreement that defies an easy resolution.</p>
<h2>The benefits of autonomy</h2>
<p>Some governments, such as <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">the United States</a>, argue that autonomous functions in weapons can yield military and humanitarian benefits. </p>
<p>They suggest, for example, that reducing the manual control that a human has over a weapon might increase its accuracy. This, in turn, could help avoid unintended harm to civilians.</p>
<p>Others find even the notion of benefits in this context to be too much. During the last Group of Governmental Experts meeting, several Latin American governments, most prominently <a href="http://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2018/gge/reports/CCWR6.11.pdf">Costa Rica and Cuba</a>, opposed any reference to potential benefits. In their view, autonomy in weapon systems only poses risks and challenges, which need to be mitigated through further regulation.</p>
<p>This divide reveals an underlying uncertainty about the aims of international law in armed conflict. For some, desirable outcomes – surgical use of force, reduced collateral damage, and so on – prevail. For others, the instruments of warfare must (sometimes) be restricted no matter the outcomes.</p>
<h2>The next step</h2>
<p>Supporters of the ban <a href="https://www.nytimes.com/aponline/2018/09/03/world/europe/ap-eu-united-nations-killer-robots.html">suggest</a> that a handful of powerful states, particularly the US and Russia, are blocking further negotiations.</p>
<p>This does not seem entirely accurate. Disagreements about the most appropriate way forward are much broader and quite fundamental. </p>
<p>Addressing the challenges of autonomous weapons is therefore not just a matter of getting a few recalcitrant governments to fall in line. Much less is it about verbally abusing them into submission. </p>
<p>If there is to be further regulation, and if that regulation is to be effective, the different viewpoints must be taken seriously – even if one disagrees with them. A quick fix is unlikely and, in the long term, probably counterproductive.</p>
<p class="fine-print"><em><span>Rain Liivoja currently holds a Branco Weiss Fellowship, administered by ETH Zurich. He has served as an expert on the Estonian delegation to the Group of Governmental Experts on Lethal Autonomous Weapons Systems. This article reflects his personal views.</span></em></p>We already have some autonomous weapons – so talk of any ban should focus on where we draw the line on what is acceptable, and what is not. Can we at least agree on that?Rain Liivoja, Associate Professor, TC Beirne School of Law, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1027362018-09-06T13:19:17Z2018-09-06T13:19:17ZAI has already been weaponised – and it shows why we should ban ‘killer robots’<figure><img src="https://images.theconversation.com/files/235215/original/file-20180906-190636-aogrro.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/unmanned-air-uav-spy-above-enemy-26952160?src=-ZOKXFCzFXCQZUjYk5R16g-1-16">Oleg Yarko/Shutterstock</a></span></figcaption></figure><p>A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.</p>
<p>I witnessed this growing gulf at a recent UN meeting of more than 70 countries <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">in Geneva</a>, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf">the US claimed</a> that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.</p>
<p>Yet it’s highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/autonomous-weapons-systems-and-changing-norms-in-international-relations/8E8CC29419AF2EF403EA02ACACFCF223">setting undesirable standards</a> for its role in the use of force.</p>
<p>A series of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">open letters</a> by prominent researchers speaking out against weaponising artificial intelligence have helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.</p>
<p>Most air defence systems <a href="https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf">already have</a> significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Humans still press the trigger, but for how long?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541?src=eQqZybPxaHhkvow-YSqfIA-1-1">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries’ militaries to drop bombs on targets. But we know from incidents <a href="https://www.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/The%20Civilian%20Impact%20of%20Drones.pdf">in Afghanistan and elsewhere</a> that drone images aren’t enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision making, often with <a href="http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/">harmful effects</a>. </p>
<p>As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices laid out by drones. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as “semi-autonomous” or so-called “legacy systems”. Again, this makes the issue of “killer robots” seem more futuristic than it really is. This also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.</p>
<p>Several key principles of international humanitarian law require deliberate human judgements that machines <a href="https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/">are incapable of</a>. For example, the legal definition of who is a civilian and who is a combatant isn’t written in a way that could be programmed into AI, and <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903">machines lack</a> the situational awareness and ability to infer things necessary to make this decision.</p>
<h2>Invisible decision making</h2>
<p>More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. Drones <a href="https://www.theguardian.com/science/the-lay-scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan">already rely heavily</a> on intelligence data processed by “black box” algorithms – very difficult for anyone to understand – to choose their proposed targets. This <a href="http://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">makes it harder</a> for the human operators who actually press the trigger to question target proposals.</p>
<p>As the UN continues to debate this issue, it’s worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically <a href="http://www.article36.org/wp-content/uploads/2016/04/A36-Disarm-Dev-Marginalisation.pdf">less likely</a> to attend international disarmament talks. So their willingness to speak out strongly against autonomous weapons is all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of the ones in favour of autonomous weapons) also reminds us that they are most at risk from this technology.</p>
<p>Given what we know about existing autonomous systems, we should be very concerned that “killer robots” will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.</p>
<p class="fine-print"><em><span>Ingvild Bode receives funding from the Joseph Rowntree Charitable Trust. </span></em></p>The debate on autonomous weapons isn’t paying enough attention to the technology already in use.Ingvild Bode, Senior Lecturer in International Relations, University of KentLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1014272018-08-21T10:32:05Z2018-08-21T10:32:05ZBan ‘killer robots’ to protect fundamental moral and legal principles<figure><img src="https://images.theconversation.com/files/232107/original/file-20180815-2909-5xtnkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The U.S. military is already testing a Modular Advanced Armed Robotic System.</span> <span class="attribution"><a class="source" href="https://www.marforpac.marines.mil/Exercises/RIMPAC/RIMPAC-Photos/igphoto/2001572635/">Lance Cpl. Julien Rodarte, U.S. Marine Corps</a></span></figcaption></figure><p>When drafting a <a href="https://www.britannica.com/event/Hague-Conventions">treaty on the laws of war</a> at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language. </p>
<p>This standard, known as the <a href="https://www.icrc.org/eng/resources/documents/article/other/57jnhy.htm">Martens Clause</a>, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”</p>
<p>I was the lead author of a <a href="https://www.hrw.org/node/321376">new report</a> by <a href="https://www.hrw.org/">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a> that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">weapons</a>.</p>
<p>Representatives of more than 70 nations will gather from August 27 to 31 at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.</p>
<h2>Making rules for the unknowable</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=712&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=712&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=712&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=895&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=895&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=895&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Russian diplomat Fyodor Fyodorovich Martens, for whom the Martens Clause is named.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Friedrich_Fromhold_Martens_(1845-1909).jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.</p>
<p>Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/03/KRC_Briefing_CCWApr2018.pdf">all working to develop</a> them. They argue that the technology would process information faster and keep soldiers off the battlefield.</p>
<p>The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.</p>
<h2>Principles of humanity</h2>
<p>The history of the Martens Clause shows that it is a fundamental principle of international humanitarian law. Originating in the <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=9FE084CDAC63D10FC12563CD00515C4D">1899 Hague Convention</a>, versions of it appear in all four <a href="https://www.icrc.org/eng/assets/files/publications/icrc-002-0173.pdf#page=83">Geneva Conventions</a> and <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6C86520D7EFAD527C12563CD0051D63C">Additional Protocol I</a>. It is cited in <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=056FD614A7D05D90C12563CD0051EC75">numerous</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=CB3CAB98FF67D28EC12574C60038D63C">disarmament</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6D8BF0E4ABD74D62C125825D004955B1">treaties</a>. In 1995, concerns under the Martens Clause motivated countries to adopt a <a href="https://ihl-databases.icrc.org/ihl/INTRO/570">preemptive ban on blinding lasers</a>. </p>
<p>The principles of humanity require humane treatment of others and respect for human life and dignity. Fully autonomous weapons could not meet these requirements because they would be unable to feel compassion, an emotion that inspires people to minimize suffering and death. The weapons would also lack the legal and ethical judgment necessary to ensure that they protect civilians in complex and unpredictable conflict situations.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Under human supervision – for now.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span>
</figcaption>
</figure>
<p>In addition, as inanimate machines, these weapons could not truly understand the value of an individual life or the significance of its loss. Their algorithms would translate human lives into numerical values. By making lethal decisions based on such algorithms, they would reduce their human targets – whether civilians or soldiers – to objects, undermining their human dignity.</p>
<h2>Dictates of public conscience</h2>
<p>The growing opposition to fully autonomous weapons shows that they also conflict with the dictates of public conscience. Governments, experts and the general public have all objected, often on moral grounds, to the possibility of losing human control over the use of force.</p>
<p>To date, <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">26 countries</a> have expressly supported a ban, including China. <a href="https://www.theguardian.com/commentisfree/2018/apr/11/killer-robot-weapons-autonomous-ai-warfare-un">Most countries</a> that have spoken at the U.N. meetings on conventional weapons have called for maintaining some form of meaningful human control over the use of force. Requiring such control is effectively the same as banning weapons that operate without a person who decides when to kill.</p>
<p>Thousands of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">scientists and artificial intelligence experts</a> have endorsed a prohibition and demanded action from the United Nations. In July 2018, they issued a <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">pledge not to assist</a> with the development or use of fully autonomous weapons. <a href="https://www.clearpathrobotics.com/2014/08/clearpath-takes-stance-against-killer-robots/">Major corporations</a> have also called for the prohibition.</p>
<p>More than 160 <a href="https://www.paxforpeace.nl/stay-informed/news/religious-leaders-call-for-a-ban-on-killer-robots">faith leaders</a> and more than 20 <a href="https://nobelwomensinitiative.org/nobel-peace-laureates-call-for-preemptive-ban-on-killer-robots/?ref=204">Nobel Peace Prize laureates</a> have similarly condemned the technology and backed a ban. Several <a href="http://www.openroboethics.org/wp-content/uploads/2015/11/ORi_LAWS2015.pdf">international</a> and <a href="http://duckofminerva.dreamhosters.com/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons.pdf">national</a> public opinion polls have found that a majority of people who responded opposed developing and using fully autonomous weapons.</p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, a coalition of 75 nongovernmental organizations from 42 countries, has led opposition by nongovernmental groups. Human Rights Watch, for which I work, co-founded and coordinates the campaign.</p>
<h2>Other problems with killer robots</h2>
<p>Fully autonomous weapons would <a href="https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf">threaten more</a> than humanity and the public conscience. They would likely violate other key rules of international law. Their use would create a gap in accountability because no one could be held individually liable for the unforeseeable actions of an autonomous robot.</p>
<p>Furthermore, the existence of killer robots would spark widespread proliferation and an arms race – dangerous developments made worse by the fact that fully autonomous weapons would be vulnerable to hacking or technological failures.</p>
<p>Bolstering the case for a ban, our Martens Clause assessment highlights in particular how delegating life-and-death decisions to machines would violate core human values. Our report finds that there should always be meaningful human control over the use of force. We urge countries at this U.N. meeting to work toward a new treaty that would save people from lethal attacks made without human judgment or compassion. A clear ban on fully autonomous weapons would reinforce the longstanding moral and legal foundations of international humanitarian law articulated in the Martens Clause.</p>
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch.</span></em></p>A standard element of international humanitarian law since 1899 should guide countries as they consider banning lethal autonomous weapons systems.Bonnie Docherty, Lecturer on Law and Associate Director of Armed Conflict and Civilian Protection, International Human Rights Clinic, Harvard Law School, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/876992017-11-28T11:54:20Z2017-11-28T11:54:20ZShould we fear the rise of drone assassins? Two experts debate<figure><img src="https://images.theconversation.com/files/196517/original/file-20171127-2025-26w2y0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> </figcaption></figure><p><em>A new short film from the <a href="https://www.stopkillerrobots.org/">Campaign Against Killer Robots</a> warns of a future where weaponised flying drones target and assassinate certain members of the public, using facial recognition technology to identify them. Is this a realistic threat that could rightly spur an effective ban on the technology? Or is it an overblown portrayal designed to scare governments into taking simplistic, unnecessary and ultimately futile action? We asked two academics for their expert opinions.</em></p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HipTO_7mUOw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Overactive imagination risks panic and distress</h2>
<p><em>Peter Lee is a Reader in Politics and Ethics and Theme Director for Security and Risk Research and Innovation at the University of Portsmouth.</em></p>
<p>The newly released short film offers a bleak dystopia with humans at the mercy of “slaughterbots”. These are autonomous micro-drones with cameras, facial recognition software and lethal explosive charges. Utterly terrifying, and – the film claims – not science fiction but a near-future scenario that really could happen. The film warns, in a frightening, deep voice: “They cannot be stopped.” The only salvation from this impending hell, it is suggested, is to ban killer robots. </p>
<p>This imaginative use of film to scare its viewers into action is the 21st-century version of the panic that HG Wells’s science fiction writings created in the early 20th century. New technologies can almost always be used for malevolent purposes but those same technologies – in this case flying robots, facial recognition, autonomous decision-making – can also drive widespread human benefit.</p>
<p>What about the killing part? Yes, three grams of explosive to the head could kill someone. But why go to the expense and trouble of making a lethal micro-drone? Such posturing about the widespread use of targeted, single-shot flying robots is a self-indulgence of technologically advanced societies. It would be hugely costly to develop such selective killing capability for use on a mass scale – certainly outside the capacity of terrorist organisations and, indeed, most militaries.</p>
<p>By comparison, in Rwanda in 1994, <a href="http://www.bbc.co.uk/news/world-africa-26875506">850,000 people</a> were killed in three months, mainly by machetes and garden tools. A <a href="https://theconversation.com/uk/topics/las-vegas-shooting-2017-44158">shooter in Las Vegas</a> killed at least 59 people and wounded more than 500 in only a few minutes. Meanwhile, in Germany, France and the UK, dozens of innocent people have been killed by terrorists using ordinary vehicles to commit murder. Cheap, easy and impossible to ban.</p>
<p>Bombing from aircraft was not outlawed at the <a href="http://www.airpowerstudies.co.uk/sitebuildercontent/sitebuilderfiles/aprvol13no3.pdf">1922-23 Peace Convention</a> at The Hague because governments didn’t want to surrender the security advantages it offered. Similarly, no government will want to relinquish the potential military benefit from drone technology.</p>
<p>Over-dramatic films and active imaginations might well cause panic and distress. But what is really needed is calm discussion and serious debate to put pressure on governments to use new technologies in ways that are beneficial to humankind – not ban them altogether. And where there are military applications, they should follow the existing laws of armed conflict and the Geneva Conventions.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=560&fit=crop&dpr=1 754w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=560&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/196688/original/file-20171128-7485-lzviv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=560&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Here come the drones.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>A wake-up call on how robots could change conflicts</h2>
<p><em>Steve Wright is a Reader in the Politics and International Relations Group at Leeds Beckett University and a member of the International Committee for Robot Arms Control.</em></p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign Against Killer Robots</a>’ terrifying new short film “Slaughterbots” predicts a new age of warfare and automated assassinations, if weapons that decide for themselves who to kill are not banned. The organisation hopes to pressure the UN to outlaw lethal robots under the <a href="https://www.un.org/disarmament/geneva/ccw/">Convention on Certain Conventional Weapons</a> (CCW), which has previously banned <a href="http://www.un.org/Depts/mine/UNDocs/ban_trty.htm">antipersonnel landmines</a>, <a href="http://www.article36.org/weapons/cluster-munitions/cluster-munitions-and-the-ccw/335/">cluster munitions</a> and <a href="http://www.weaponslaw.org/instruments/1995-protocol-on-blinding-laser-weapons">blinding lasers on the battlefield</a>.</p>
<p>Some have suggested that the new film is scaremongering. But the technologies needed to build such autonomous weapons – <a href="https://www.rt.com/news/395375-kalashnikov-automated-neural-network-gun/">intelligent targeting</a> algorithms, geo-location, facial recognition – <a href="https://icrac.net/2017/11/icrac-statement-at-the-2017-ccw-gge-meeting/">are already with us</a>. Many <a href="http://www.defenseone.com/technology/2014/10/inside-navys-secret-swarm-robot-experiment/95813/?oref=d-skybox">existing lethal drone systems</a> only operate in a semi-autonomous mode because of legal constraints and could do much more if allowed. It would not take much further development for the technology to reach the capabilities shown in the film. </p>
<p>Perhaps the best way to see the film is less a realistic portrayal of how this technology will be used without a ban and more a wake-up call about how it could change conflicts. For some time to come, small arms and light weapons will remain the major instruments of political violence. But the film highlights how the intelligent targeting systems supposedly designed to minimise casualties could be used for a selective cull of an entire city. It’s easy to imagine how this might be put to use in a sectarian or ethnic conflict.</p>
<p>No international ban on inhumane weapons is absolutely watertight. The cluster munitions treaty <a href="https://www.nytimes.com/2016/09/02/world/middleeast/cluster-bombs-syria-yemen.html">has not prevented</a> Russia from using them in Syria, or Saudi Arabia from bombing Yemeni civilians with <a href="https://www.theguardian.com/uk-news/2016/dec/18/uk-cluster-bombs-used-in-yemen-by-saudi-arabia-finds-research">old British stock</a>. But the landmine treaty has <a href="http://www.article36.org/weapons/landmines/fifteen-years-after-the-landmine-ban-the-number-of-new-casualties-halves/">halved the estimated number</a> of casualties – and even some of those states that have not ratified the ban, such as the US, now act as if they have. A ban on killer robots could have a similar effect.</p>
<p>Similarly, a ban might not remove all chance of terrorists using these weapons. The international arms market is too promiscuous. But it would remove potential stockpiles of killer robots by forcing governments to limit their manufacture.</p>
<p>Some have argued armed robotic systems might actually help reduce suffering in war since they don’t get tired, abuse captives, or act in self-defence or revenge. <a href="https://www.cc.gatech.edu/ai/robot-lab/online-publications/GIT-GVU-09-02.pdf">They believe</a> that autonomous weapons could be programmed to uphold international law better than humans do.</p>
<p>But, as Prof Noel Sharkey of the <a href="https://icrac.net/">International Committee for Robot Arms Control</a> points out, this view is based on the fantasy of robots being super-smart terminators, when today “they have the <a href="http://moralmachines.blogspot.co.uk/2008/12/killer-robots-or-friendly-fridges.html">intelligence of a fridge</a>”. While the technology to enable killer robots exists, the technology to restrain them does not – so a ban is our best hope of avoiding the kind of scenario shown in the film.</p>
<p class="fine-print"><em><span>Steve Wright is affiliated with the International Campaign for Armed Robot Control (ICRAC). </span></em></p><p class="fine-print"><em><span>Peter Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Campaign Against Killer Robots has launched a terrifying film showing why lethal drones need to be banned.Peter Lee, Reader in Politics and Ethics, University of PortsmouthSteve Wright, Reader, Politics and International Relations Group, Leeds Beckett UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/867582017-11-06T19:21:32Z2017-11-06T19:21:32ZDear Prime Minister: we’d like you to join the call for a ban on killer robots<figure><img src="https://images.theconversation.com/files/193335/original/file-20171106-1008-l1e2hq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A US Air Force MQ-9 Reaper drone is piloted remotely over Afghanistan. But what if AI was to take control? </span> <span class="attribution"><a class="source" href="http://www.afrc.af.mil/News/Photos/igphoto/2000608254/">US Air Force Photo/Lt Col Leslie Pratt.</a></span></figcaption></figure><p>Leading researchers in robotics and artificial intelligence (AI) from Australia and Canada have today published open letters calling on their respective Prime Ministers to take a stand against weaponising AI.</p>
<p>The letters ask that Australia and Canada be the next countries to call for a ban on lethal autonomous weapons at the upcoming United Nations (UN) disarmament conference, the strangely named Conference on the Convention on Certain Conventional Weapons (<a href="https://www.un.org/disarmament/geneva/ccw/">CCW</a>) to be held in Geneva later this month.</p>
<p>To date, 19 countries have called for a pre-emptive ban on autonomous weapons: Algeria, Argentina, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, Holy See, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Venezuela and Zimbabwe. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-problem-too-big-1-artificial-intelligence-and-killer-robots-77957">No problem too big #1: Artificial intelligence and killer robots</a>
</strong>
</em>
</p>
<hr>
<h2>Before Terminator</h2>
<p>Lethal autonomous weapons are often described as “killer robots”. This paints a deceptive picture in most people’s minds. </p>
<p>We’re not talking about a movie-style Terminator, but rather much simpler technologies that are potentially only a few years away. Think of a Predator drone flying above the skies of Iraq, but replace the human pilot with a computer. Now the computer could make the final life-or-death decision to fire its Hellfire missile.</p>
<p>I’m most worried not about smart AI but stupid AI. We will be giving machines the right to make such life-or-death decisions, but current technologies are not capable of making such decisions correctly. </p>
<p>In the longer term, autonomous weapons will become more capable. But my concern then shifts to how such weapons will destabilise the geopolitical order and ultimately become another weapon of mass destruction. </p>
<p>The Australian letter was released simultaneously with one signed by hundreds of AI experts in Canada, including two of the founders of Deep Learning, AI pioneers <a href="http://www.cs.toronto.edu/%7Ehinton/">Geoffrey Hinton</a> and <a href="http://www.iro.umontreal.ca/%7Ebengioy/yoshua_en/">Yoshua Bengio</a>. The Canadian letter urges Prime Minister Justin Trudeau to support such a ban.</p>
<p>In the interest of full disclosure, I organised the <a href="https://www.cse.unsw.edu.au/%7Etw/letter.pdf">Australian letter</a>. It is signed by a dozen or so deans and heads of schools, as well as dozens of professors of AI and robotics. In total, 122 faculty members working in AI and robotics in Australia have signed the letter. </p>
<p>The letter says lethal autonomous weapons lacking meaningful human control sit on the wrong side of a clear moral line. It adds:</p>
<blockquote>
<p>To this end, we ask Australia to announce its support for the call to ban lethal autonomous weapons systems at the upcoming UN Conference on CCW. Australia should also commit to working with other states to conclude a new international agreement that achieves this objective.</p>
<p>In this way, our government can reclaim its position of moral leadership on the world stage as demonstrated previously in other areas like the non-proliferation of nuclear weapons.</p>
<p>With Australia’s recent election to the UN’s Human Rights Council, the issue of lethal autonomous weapons is even more pressing for Australia to address.</p>
</blockquote>
<h2>Support is growing</h2>
<p>The AI and robotics communities have sent a clear and consistent message over the past couple of years about this issue. In 2015, thousands of AI and robotics researchers from around the world signed <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">an open letter released at the start of the main AI conference</a> calling for a ban. </p>
<p>Most recently, industry joined the call when, in August this year, <a href="https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war">more than 100 founders of AI and robotics companies warned</a> of opening “the Pandora’s box” and asked the UN to take urgent action.</p>
<p>The UN is listening and taking action, though like all things diplomatic, progress is not rapid. In December 2016, after three years of informal talks, the UN decided to begin formal discussions within a <a href="https://www.un.org/disarmament/geneva/ccw/meetings-of-the-gge/">Group of Governmental Experts</a>. As the name suggests, this is a group of technical, legal and political experts chosen by the member states to make recommendations about autonomous weapons that could contribute to, but not negotiate, a treaty banning their use.</p>
<p>This group meets for the first time in Geneva next Monday. It will discuss questions such as whether autonomous weapons should always be subject to “meaningful human control”, and what that would mean in practice. </p>
<h2>An AI arms race</h2>
<p>The international non-government body <a href="https://www.hrw.org/">Human Rights Watch</a> has invited me to the meeting and I will speak about the dangers of not taking action to ban autonomous weapons. Without a ban, there will be an arms race to develop increasingly capable autonomous weapons.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-we-signed-the-open-letter-from-scientists-supporting-a-total-ban-on-nuclear-weapons-75209">Why we signed the open letter from scientists supporting a total ban on nuclear weapons</a>
</strong>
</em>
</p>
<hr>
<p>This has rightly been <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">described as the third revolution in warfare</a>. The first revolution was the invention of gunpowder. The second was the invention of nuclear bombs. This third revolution would be another step change in the speed and efficiency with which we could kill. </p>
<p>For these will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned or is in the process of being banned: <a href="https://www.un.org/disarmament/wmd/chemical/">chemical weapons</a> and <a href="https://www.un.org/disarmament/wmd/bio/">biological weapons</a> are banned, and a <a href="https://theconversation.com/were-close-to-banning-nuclear-weapons-killer-robots-must-be-next-80741">nuclear weapons treaty</a> was recently adopted and will become law once 50 states have ratified it. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.</p>
<p>We cannot stop AI technology being developed. It will be used for many peaceful purposes, like autonomous cars. But we can make it morally unacceptable to use that technology to kill, as we have decided with chemical and biological weapons.</p>
<p>This, I hope, will make the world a safer and better place.</p>
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Leading experts in AI and robotics want the Prime Ministers of Australia and Canada to join the growing campaign to ban killer robots.Toby Walsh, Research Group Leader at Data61, Professor of AI, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/827542017-08-29T04:23:04Z2017-08-29T04:23:04ZArtificial intelligence researchers must learn ethics<p>Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have.</p>
<p>More than 100 technology pioneers recently published an <a href="https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war">open letter to the United Nations</a> on the topic of lethal autonomous weapons, or “killer robots”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-make-robots-that-we-can-trust-79525">How to make robots that we can trust</a>
</strong>
</em>
</p>
<hr>
<p>These people, including the entrepreneur Elon Musk and the founders of several robotics companies, are part of an effort that <a href="https://futureoflife.org/open-letter-autonomous-weapons/">began in 2015</a>. The original letter called for an end to an arms race that it claimed could be the “third revolution in warfare, after gunpowder and nuclear arms”.</p>
<p>The UN has a role to play, but responsibility for the future of these systems also needs to begin in the lab. The education system that trains our AI researchers needs to school them in ethics as well as coding.</p>
<h2>Autonomy in AI</h2>
<p>Autonomous systems can make decisions for themselves, with little to no input from humans. This greatly increases the usefulness of robots and similar devices. </p>
<p>For example, an autonomous delivery drone only requires the delivery address, and can then work out for itself the best route to take – overcoming any obstacles that it may encounter along the way, such as adverse weather or a flock of curious seagulls.</p>
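<p>As a rough sketch of what that kind of autonomy involves – the grid world, obstacle and function names below are invented for illustration, not taken from any real drone software – a simple breadth-first search is enough to let a drone replan a route around a blocked region:</p>
<pre><code>from collections import deque
from itertools import product

def plan_route(start, goal, free_cells):
    """Breadth-first search for a shortest route over a grid of free cells."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            step = (x + dx, y + dy)
            if step in free_cells and step not in visited:
                visited.add(step)
                queue.append(path + [step])
    return None  # no route exists

# A hypothetical 10x10 patch of airspace; a flock of seagulls blocks three cells.
airspace = set(product(range(10), range(10)))
seagulls = {(4, 4), (4, 5), (4, 6)}
print(plan_route((0, 5), (9, 5), airspace - seagulls))
</code></pre>
<p>The drone detours around the blocked cells without being told how. In a real system the map would come from sensors and the search would run continuously, but the principle – the vehicle, not a human, chooses the path – is the same.</p>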
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=420&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=420&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=420&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Drones deliver more than just food.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/routeplanning/33751810990/in/photolist-TqwRvG-TP22fV-TqwRE9-TqwRQQ-9NvFDv-TWXUMN-TqwR13-TqwRjQ-TqwQF5-TqwRaS-TqwQyS-nxAbfy-q3FsGo-qkeRYv-q7bQta-STZJcB-9ND8dE-STZJGK-U8xYB1-TzWW3F-S66Moi-T69yr1-S66MyD-VADYoe-S66MKa-S66MUZ-S66MRT-oqrUG8-nxYm9s-j5xuXW-ni9oXC-4EX5m8-pouax8-dBJHYu-ikvXgY-uU2e1x-8mpYBw-B9vcqD-5Y9TAs-S66MH6-S66MDZ-v5z1pV-VscVKh-JXgHJQ-T1EWSV-RBtfBF-xRuSuf-xUtfyM-L5XAhB-xSXuwU">www.routexl.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span>
</figcaption>
</figure>
<p>There has been a great deal of research into autonomous systems, and delivery drones are currently being developed by companies such as <a href="https://thenextweb.com/tech/2017/08/24/amazon-patent-details-the-scary-future-of-drone-delivery/#.tnw_1oUtjT67">Amazon</a>. Clearly, the same technology could easily be used to make deliveries that are significantly nastier than food or books. </p>
<p>Drones are also becoming smaller, cheaper and more robust, which means it will soon be feasible for flying armies of thousands of drones to be manufactured and deployed. </p>
<p>The potential for the deployment of weapons systems like this, largely decoupled from human control, prompted the letter urging the UN to “find a way to protect us all from these dangers”.</p>
<h2>Ethics and reasoning</h2>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1133&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1133&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1133&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Thomas Aquinas.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Carlo_Crivelli_007.jpg">Wikipedia Commons</a></span>
</figcaption>
</figure>
<p>Whatever your opinion of such weapons systems, the issue highlights the need for consideration of ethical issues in AI research. </p>
<p>As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning. </p>
<p>It is precisely this kind of reasoning that is increasingly required. For example, driverless cars, which are <a href="http://fortune.com/2017/01/20/self-driving-test-sites/">being tested in the US</a>, will need to be able to make judgements about potentially dangerous situations.</p>
<p>For instance, how should a car react if a cat unexpectedly crosses the road? Is it better to run over the cat, or to swerve sharply to avoid it, risking injury to the car’s occupants? </p>
<p>Hopefully such cases will be rare, but the car will need to be designed with some specific principles in mind to guide its decision making. As Virginia Dignum put it when delivering her paper “<a href="https://www.ijcai.org/proceedings/2017/0655.pdf">Responsible Autonomy</a>” at the recent International Joint Conference on Artificial Intelligence (<a href="https://ijcai-17.org/">IJCAI</a>) in Melbourne: </p>
<blockquote>
<p>The driverless car will have ethics; the question is whose? </p>
</blockquote>
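<p>One way to make that question concrete is to notice that somebody has to choose the numbers. In the sketch below (entirely illustrative – the options, weights and risk figures are invented, not drawn from any real vehicle), the designer’s ethical priorities appear directly as parameters of the car’s decision rule:</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harm_to_occupants: float  # assumed injury risk, scale 0 to 1
    harm_to_animal: float     # assumed risk to the cat, scale 0 to 1

def choose(options, occupant_weight, animal_weight):
    """Pick the action with the lowest weighted expected harm.
    The weights are where 'whose ethics' enters the code."""
    def cost(option):
        return (occupant_weight * option.harm_to_occupants
                + animal_weight * option.harm_to_animal)
    return min(options, key=cost)

options = [
    Option("brake and stay in lane", harm_to_occupants=0.01, harm_to_animal=0.6),
    Option("swerve sharply", harm_to_occupants=0.15, harm_to_animal=0.05),
]

# Two designers, two sets of weights, two different cars:
print(choose(options, occupant_weight=1.0, animal_weight=0.1).name)  # brakes
print(choose(options, occupant_weight=1.0, animal_weight=3.0).name)  # swerves
</code></pre>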
<p>A similar theme was explored in the paper “<a href="https://www.ijcai.org/proceedings/2017/0658.pdf">Automating the Doctrine of Double Effect</a>” by Naveen Sundar Govindarajulu and Selmer Bringsjord. </p>
<p>The <a href="https://plato.stanford.edu/entries/double-effect/">Doctrine of Double Effect</a> is a means of reasoning about moral issues, such as the right to self-defence under particular circumstances, and is credited to the 13th-century Catholic scholar <a href="http://www.iep.utm.edu/aquinas/">Thomas Aquinas</a>. </p>
<p>The name Double Effect comes from obtaining a good effect (such as saving someone’s life) as well as a bad effect (harming someone else in the process). This is a way to justify actions such as a drone shooting at a car that is running down pedestrians.</p>
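<p>Govindarajulu and Bringsjord develop a rigorous computational logic for this; purely as an illustration – a toy sketch of the idea, not their formalism – the doctrine’s classical conditions can be read as a conjunction of checks that a machine could evaluate:</p>
<pre><code>def permissible_under_double_effect(action_wrong_in_itself,
                                    harm_intended,
                                    harm_is_means_to_good,
                                    good_outweighs_harm):
    """Toy reading of the doctrine's four classical conditions:
    1. the action must not be wrong in itself;
    2. the harm must be foreseen but not intended;
    3. the harm must not be the means by which the good is achieved;
    4. the good must be proportionate to the harm."""
    return (not action_wrong_in_itself
            and not harm_intended
            and not harm_is_means_to_good
            and good_outweighs_harm)

# The drone-versus-rampaging-car case from the text, on this toy reading:
print(permissible_under_double_effect(
    action_wrong_in_itself=False,  # stopping the car is a legitimate act
    harm_intended=False,           # the driver's death is foreseen, not sought
    harm_is_means_to_good=False,   # disabling the car, not the harm, saves lives
    good_outweighs_harm=True))     # many pedestrians are protected
</code></pre>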
<h2>What does this mean for education?</h2>
<p>The emergence of ethics as a topic for discussion in AI research suggests that we should also consider how we prepare students for a world in which autonomous systems are increasingly common. </p>
<p>The need for “<a href="http://stemfoundation.org.uk/asset/resource/%7B3EA5228A-B620-4783-AE91-190F2C182DAA%7D/resource.pdf">T-shaped</a>” people is now well recognised. Companies are looking for graduates not just with a specific area of technical depth (the vertical stroke of the T), but also with professional skills and personal qualities (the horizontal stroke). Graduates who combine the two can see problems from different perspectives and work effectively in multidisciplinary teams. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A Google self-driving car.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/romanboed/9572198632/in/photolist-fzS2as-dAtLrm-8XUCAY-ohAYe7-dW5TrG-eBWbNx-pehkzt-oQLPXk-bGdwo-ohhjHg-iKrUr5-bGdwn-fCyoaC-izMgC2-aMbnU2-g4P8w-qGxMtu-6rdujU-oSLAA2-eFeCPe-hRrw8M-aeaaQy-ENwDQj-wpZxdf-z62NVA-o7T6qb-dUQ5qV-9Fiind-7zWzs-embVPp-oi47W2-a8KbPa-QAXtnR-qPTpog-dUFy7q-druBYw-6NKSvq-92iCEB-4SewKg-rntpFL-9JwsGh-VSuL1f-9o1FrD-eb2sof-aUXTD8-WpavKu-csawD3-zdgRMN-RgKb9k-a7mSqE">Roman Boed</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Most undergraduate courses in computer science and similar disciplines include a course on professional ethics and practice. These are usually focused on intellectual property, copyright, patents and privacy issues, which are certainly important. </p>
<p>However, it seems clear from the discussions at IJCAI that there is an emerging need for additional material on broader ethical issues. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/never-mind-killer-robots-even-the-good-ones-are-scarily-unpredictable-82963">Never mind killer robots – even the good ones are scarily unpredictable</a>
</strong>
</em>
</p>
<hr>
<p>Topics could include methods for determining the lesser of two evils, legal concepts such as criminal negligence, and the historical effect of technology on society.</p>
<p>The key point is to enable graduates to integrate ethical and societal perspectives into their work from the very beginning. It also seems appropriate to require research proposals to demonstrate how ethical considerations have been incorporated. </p>
<p>As AI becomes more widely and deeply embedded in everyday life, it is imperative that technologists understand the society in which they live and the effect their inventions may have on it.</p>
<p class="fine-print"><em><span>James Harland does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Technologists need to understand the society in which they live, and the effect their inventions could have on it.James Harland, Associate Professor in Computational Logic, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/820352017-08-28T02:46:09Z2017-08-28T02:46:09ZArtificial intelligence cyber attacks are coming – but what does that mean?<figure><img src="https://images.theconversation.com/files/182112/original/file-20170815-18355-4q1mez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Hackers will start to get help from robots and artificial intelligence soon.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/artificial-intelligence-hand-type-on-keyboard-539711077">Jinning Li/Shutterstock.com</a></span></figcaption></figure><p>The next major cyberattack could involve artificial intelligence systems. It could even happen soon: At a recent cybersecurity conference, 62 industry professionals, <a href="https://www.cylance.com/en_us/blog/black-hat-attendees-see-ai-as-double-edged-sword.html">out of the 100 questioned</a>, said they thought the first AI-enhanced cyberattack could come in the next 12 months.</p>
<p>This doesn’t mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts – things like identity theft, denial-of-service attacks and password cracking – more powerful and more efficient. This is dangerous enough – this type of hacking can steal money, <a href="https://www.equifax.com/assets/PSOL/15-9814_psol_emotionalToll_wp.pdf">cause emotional harm</a> and even <a href="https://www.wired.com/2016/08/jeep-hackers-return-high-speed-steering-acceleration-hacks/">injure or kill people</a>. Larger attacks can <a href="https://doi.org/10.1109/JPROC.2011.2165269">cut power</a> to <a href="http://dx.doi.org/10.1111/risa.12844">hundreds of thousands of people</a>, <a href="https://theconversation.com/the-petya-ransomware-attack-shows-how-many-people-still-dont-install-software-updates-77667">shut down hospitals</a> and even <a href="http://dx.doi.org/10.1111/risa.12844">affect national security</a>. </p>
<p>As a scholar who has <a href="https://doi.org/10.1016/j.techsoc.2013.12.004">studied AI decision-making</a>, I can tell you that interpreting human actions is still difficult for AIs, and that humans <a href="https://theconversation.com/finding-trust-and-understanding-in-autonomous-technologies-70245">don’t really trust AI systems</a> to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks – and cyberdefense – are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems and launch them at particular targets. But nevertheless, adding AI to today’s cybercrime and cybersecurity world will <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">escalate</a> what is already a rapidly changing arms race between attackers and defenders.</p>
<h2>Faster attacks</h2>
<p>Beyond computers’ lack of need for food and sleep – needs that limit human hackers’ efforts, even when they work in teams – automation can make complex attacks much faster and more effective. </p>
<p>To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs <a href="https://www.cisco.com/c/en/us/about/security-center/virus-differences.html">the ability to self-replicate</a>, spreading from computer to computer without specific human instructions. In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on several computers or devices to overwhelm servers. The attack that <a href="https://www.welivesecurity.com/2016/10/24/10-things-know-october-21-iot-ddos-attacks/">shut down large sections of the internet in October 2016</a> used this type of approach. In some cases, common attacks are made available as a script that allows an unsophisticated user to choose a target and launch an attack against it.</p>
<p>AI, however, could help human cybercriminals customize attacks. <a href="https://theconversation.com/spearphishing-roiled-the-presidential-campaign-heres-how-to-protect-yourself-68274">Spearphishing attacks</a>, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.</p>
<p>AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from their account until long after the thief has gotten away.</p>
<h2>Improved adaptation</h2>
<p>AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability, or start scanning for new ways into the system – without waiting for human instructions. </p>
<p>This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">programming and technological arms race</a>, with defenders developing AI assistants to identify and protect against attacks – or perhaps even AIs with <a href="https://theconversation.com/cybersecuritys-next-phase-cyber-deterrence-67090">retaliatory attack capabilities</a>.</p>
<h2>Avoiding the dangers</h2>
<p>Operating autonomously could lead an AI system to attack a system it shouldn’t, or <a href="https://www.theguardian.com/world/2015/jul/02/robot-kills-worker-at-volkswagen-plant-in-germany">cause unexpected damage</a>. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for <a href="https://doi.org/10.2139/ssrn.2283767">unmanned aerial vehicles to operate autonomously</a> has raised similar questions of the need for <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">humans to make the decisions about targets</a>. </p>
<p>The consequences and implications are significant, but most people won’t notice a big change when the first AI attack is unleashed. For most of those affected, the outcome will be the same as human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grow.</p>
<p class="fine-print"><em><span>Jeremy Straub is the Associate Director of the NDSU Institute for Cyber Security Education and Research. </span></em></p>It won’t be like an army of robots marching in the streets, but AI hacking is on the horizon.Jeremy Straub, Assistant Professor of Computer Science, North Dakota State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/829632017-08-25T12:20:46Z2017-08-25T12:20:46ZNever mind killer robots – even the good ones are scarily unpredictable<figure><img src="https://images.theconversation.com/files/183350/original/file-20170824-18702-1fxlsp9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who could have predicted it would end like this?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The heads of more than 100 of the world’s top artificial intelligence companies are very alarmed about the development of “killer robots”. In an <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">open letter</a> to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.</p>
<p>But the real threat is much bigger – and not just from human misconduct but from the machines themselves. Research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions. On one level this means human societies can behave very differently to what you might expect just looking at individual behaviour. But it can also apply to technology. Even ecosystems of relatively simple AI programs – what we call stupid, good bots – can surprise us, even when the individual bots are behaving well.</p>
<p>The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear way. This makes these systems very hard to model and understand. For example, even after many years of climatology, it’s still impossible to make long-term weather predictions. These systems are often very sensitive to small changes and can experience explosive feedback loops. It is also very difficult to know the precise state of such a system at any one time. All these things make these systems intrinsically unpredictable. </p>
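<p>A classic demonstration of that sensitivity – a standard chaos-theory example, not a model of any system discussed here – is the logistic map. Two starting states that differ by one part in a billion soon follow completely different trajectories:</p>
<pre><code>def logistic(x, r=4.0):
    """One step of the logistic map, a canonical chaotic system."""
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001  # differ by one part in a billion
for step in range(41):
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
    a, b = logistic(a), logistic(b)
</code></pre>
<p>By around step 30 the gap is as large as the values themselves: the tiny initial difference has been amplified beyond recovery, which is why precise long-term prediction fails.</p>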
<p>All these principles apply to large groups of individuals acting in their own way, whether that’s human societies or groups of AI bots. My colleagues and I <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774">recently studied</a> one type of a complex system that featured good bots used to automatically edit Wikipedia articles. These different bots are designed and exploited by Wikipedia’s trusted human editors and their underlying software is open-source and available for anyone to study. Individually, they all have a common goal of improving the encyclopaedia. Yet their collective behaviour turns out to be surprisingly inefficient.</p>
<p>These Wikipedia bots work based on well-established rules and conventions, but because the website doesn’t have a central management system there is no effective coordination between the people running different bots. As a result, we found pairs of bots that have been undoing each other’s edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn’t notice it either.</p>
<p>The bots are designed to speed up the editing process. But slight differences in the design of the bots or between people who use them can lead to a massive waste of resources in an ongoing “edit war” that would have been resolved much quicker with human editors.</p>
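<p>To see how such a stand-off arises, consider a toy simulation – the rules here are invented for illustration; the real Wikipedia bots are more sophisticated – in which two bots, each enforcing its own perfectly reasonable convention, revert each other indefinitely:</p>
<pre><code>def make_bot(preferred_spelling):
    """Each bot 'fixes' the page to its own, individually sensible, convention."""
    def edit(page):
        return preferred_spelling if page != preferred_spelling else page
    return edit

bot_a = make_bot("colour")  # follows a British-English convention
bot_b = make_bot("color")   # follows an American-English convention

page, reverts = "colour", 0
for day in range(365):
    for bot in (bot_a, bot_b):
        new_page = bot(page)
        if new_page != page:
            reverts += 1
            page = new_page

print(f"{reverts} mutual reverts in a year, with neither bot ever noticing")
</code></pre>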
<p>We also found that the bots behaved differently in different language editions of Wikipedia. The rules are more or less the same, the goals are identical, the technology is similar. But in German Wikipedia, the collaboration between bots is much more efficient and productive compared to, for example, Portuguese Wikipedia. This can only be explained by the differences between the human editors who run these bots in different environments.</p>
<h2>Exponential confusion</h2>
<p>Wikipedia bots have very little autonomy and the system already operates very differently to the goals of individual bots. But the Wikimedia Foundation is <a href="https://blog.wikimedia.org/2017/07/19/scoring-platform-team/">planning to use</a> AI that will give more autonomy to the bots. That will likely lead to even more unexpected behaviour. </p>
<p>Another example is what can happen when two bots designed to speak to humans interact with each other. We’re no longer surprised by the answers given by artificial personal assistants such as the iPhone’s Siri. But put several of these kinds of chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WnzlbyTZsQY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The bigger the system becomes and the more autonomous each bot is, the more complex and hence unpredictable the future behaviour of the system will be. Wikipedia is an example of a large number of relatively simple bots; the chatbot example involves a small number of rather sophisticated and creative bots – in both cases, unexpected conflicts emerged. The complexity, and therefore the unpredictability, increases exponentially as you add more and more individuals to the system. So in a future system with a large number of very sophisticated robots, the unexpected behaviour could go beyond our imagination.</p>
<h2>Self-driving madness</h2>
<p>For example, self-driving cars promise exciting advances in the efficiency and safety of road travel. But we don’t yet know what will happen once we have a large, wild system of fully autonomous vehicles. They may well behave very differently to a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars “trained” by different humans in different environments start interacting with one another.</p>
<p>Humans can adapt to new rules and conventions relatively quickly but can still have trouble switching between systems. This can be way more difficult for artificial agents. If a “German-trained” car was driving in Italy, for example, we just don’t know how it would deal with the written rules and unwritten cultural conventions being followed by the many other “Italian-trained” cars. Something as common as crossing an intersection could become lethally risky because we just wouldn’t know if the cars would interact as they were supposed to or whether they would do something completely unpredictable.</p>
<p>Now think of the killer robots that Elon Musk and his colleagues are worried about. A single killer robot could be very dangerous in the wrong hands. But what about an unpredictable system of killer robots? I don’t even want to think about it.</p>
<p class="fine-print"><em><span>Taha Yasseri receives funding from the European Commission and Google. </span></em></p>The unexpected behaviour of even simple bots is only going to get more dramatic as AI scales up.Taha Yasseri, Research Fellow in Computational Social Science, Oxford Internet Institute, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/807412017-07-12T06:40:42Z2017-07-12T06:40:42ZWe’re close to banning nuclear weapons – killer robots must be next<figure><img src="https://images.theconversation.com/files/177631/original/file-20170710-26770-1ailwor.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">International flags fly at United Nations headquarters, New York City. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/new-york-ny-usa-september-24-488226595?src=Wz3SZJ8USBb_JrBWiWyoqg-1-1">Osugi/shutterstock </a></span></figcaption></figure><p>While much of the world’s attention was focused last week on the G20 meeting in Hamburg, and Donald Trump’s first face-to-face meeting with Vladimir Putin, a historic decision took place at the United Nations (UN) in New York. </p>
<p>On Friday, 122 countries voted in favour of the “<a href="http://www.un.org/disarmament/ptnw/index.html">Treaty on the Prohibition of Nuclear Weapons</a>”. </p>
<p>Nuclear weapons were the only weapons of mass destruction without a treaty banning them, despite the fact that they are potentially the most potent of all weapons. <a href="https://www.un.org/disarmament/wmd/bio/">Biological weapons were banned in 1975</a> and <a href="https://www.un.org/disarmament/wmd/chemical/">chemical weapons in 1992</a>.</p>
<p>This new treaty sets the international norm that nuclear weapons are no longer morally acceptable. This is the first step along the road to their eventual elimination from our planet, although the issue of North Korea’s nuclear ambitions <a href="https://theconversation.com/as-an-historic-nuclear-weapons-treaty-is-reached-g20-leaders-miss-the-mark-on-north-korea-80464">remains unresolved</a>.</p>
<p>Earlier this year, thousands of scientists including 30 Nobel Prize winners signed an open letter calling for nuclear weapons to be banned. <a href="https://theconversation.com/why-we-signed-the-open-letter-from-scientists-supporting-a-total-ban-on-nuclear-weapons-75209">I was one</a> of the signatories, and am pleased to see that call answered so swiftly and resolutely. </p>
<p>More broadly, the nuclear weapon treaty offers hope for formal negotiations about lethal autonomous weapons (otherwise known as killer robots) due to start in the UN in November. Nineteen countries have <a href="http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_CountryViews_13Dec2016.pdf">already called for a pre-emptive ban on such weapons</a>, fearing they will be the next weapon of mass destruction that man will invent. </p>
<p>An arms race is underway to develop autonomous weapons in every theatre of war. In the air, for instance, BAE Systems is prototyping its <a href="https://en.wikipedia.org/wiki/BAE_Systems_Taranis">Taranis drone</a>. On the sea, the US Navy has launched its first autonomous ship, the <a href="https://en.wikipedia.org/wiki/Sea_Hunter">Sea Hunter</a>. And under the sea, Boeing has a working version of a 15-metre-long <a href="http://www.boeing.com/features/2016/03/bds-echo-voyager-03-16.page">Echo Voyager autonomous submarine</a>. </p>
<h2>New treaty, new hope</h2>
<p>The nuclear weapons <a href="http://www.undocs.org/en/a/conf.229/2017/L.3/Rev.1">treaty</a> is an important step towards delegitimising nuclear weapons, and puts strong moral pressure on the nuclear states like the US, the UK and Russia to reduce and eventually to eliminate such weapons from their arsenals. The treaty also obliges states to support victims of the use and testing of nuclear weapons, and to address environmental damage caused by nuclear weapons.</p>
<p>It has to be noted that the talks at the UN and the subsequent vote on the treaty were boycotted by <em>all</em> the nuclear states, as well as by a number of other countries. Australia has played a leading role in the nuclear non-proliferation treaty and other disarmament talks. Disappointingly, Australia was one of the countries boycotting last week’s talks. In contrast, <a href="https://www.un.org/disarmament/ptnw/participants.html">New Zealand</a> played a leading role, with its ambassador serving as one of the Vice-Presidents of the talks. </p>
<p>Whilst 122 countries voted for the treaty, one country (the Netherlands) voted against, and one (Singapore) abstained from the vote. </p>
<p>The treaty will open for signature by states at the United Nations in New York on September 20, 2017. It will then come into force once 50 states have ratified it. </p>
<p>Even though major states have boycotted previous disarmament treaties, this has not prevented the treaties having effect. The US, for instance, has never signed the <a href="http://www.un.org/Depts/mine/UNDocs/ban_trty.htm">1999 accord on anti-personnel landmines</a>, wishing to support South Korea’s use of such mines in the Demilitarized Zone (DMZ) with North Korea. Nevertheless, the <a href="https://obamawhitehouse.archives.gov/the-press-office/2014/06/27/fact-sheet-changes-us-anti-personnel-landmine-policy">US follows the accord</a> outside of the DMZ. </p>
<p>Given that 122 countries voted for the nuclear prohibition treaty, it is likely that 50 states will sign and ratify the treaty in short order, and that it will then come into force. And, as seen with the landmine accord, this will increase pressure on nuclear states like the US and Russia to reduce and perhaps even eliminate their nuclear stockpiles. </p>
<p>When the chemical weapons convention came into effect in 1997, <a href="https://www.opcw.org/fileadmin/OPCW/Fact_Sheets/English/Fact_Sheet_6_-_destruction.pdf">eight countries declared stockpiles</a>, which are now <a href="https://www.opcw.org/fileadmin/OPCW/CSP/C-21/en/c2104_e_.pdf">partially or completely eliminated</a>.</p>
<h2>Public pressure</h2>
<p>The vote also raises hope on the issue of killer robots. Two years ago, I and thousands of my colleagues signed an open letter calling for a <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">ban on killer robots</a>. This pushed the issue up the agenda at the UN and helped get 123 nations to vote last December at the UN in Geneva for <a href="https://www.stopkillerrobots.org/2016/12/formal-talks/">the commencement of formal talks</a>.</p>
<p>The UN moves a little slowly at times. Nuclear disarmament is the longest sought objective of the UN, dating back to <a href="https://documents-dds-ny.un.org/doc/RESOLUTION/GEN/NR0/032/52/IMG/NR003252.pdf?OpenElement">the very first resolution adopted by the General Assembly in January 1946</a> shortly after nuclear bombs had been used by the US for the first time. Nevertheless, this is a hopeful moment in a time when hope is in short supply. </p>
<p>The UN does move in the right direction and countries can come together and act in our common interest. Bravo.</p>
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Treaties banning biological and chemical weapons are in place, and the path is clear to remove nuclear weapons too. Lethal autonomous weapons (killer robots) should be next.Toby Walsh, Professor of AI at UNSW, Research Group Leader, Data61Licensed as Creative Commons – attribution, no derivatives.