tag:theconversation.com,2011:/global/topics/lethal-autonomous-weapons-systems-16030/articlesLethal Autonomous Weapons Systems – The Conversation2023-12-08T05:11:49Ztag:theconversation.com,2011:article/2193022023-12-08T05:11:49Z2023-12-08T05:11:49ZIsrael’s AI can produce 100 bombing targets a day in Gaza. Is this the future of war?<p>Last week, reports emerged that the Israel Defense Forces (IDF) are using an artificial intelligence (AI) system <a href="https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/">called Habsora</a> (Hebrew for “The Gospel”) to select targets in the war on Hamas in Gaza. The system has reportedly been used to <a href="https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets">find more targets for bombing</a>, to link locations to Hamas operatives, and to estimate likely numbers of civilian deaths in advance.</p>
<p>What does it mean for AI targeting systems like this to be used in conflict? My research into the social, political and ethical implications of military use of remote and autonomous systems shows AI is already altering the character of war. </p>
<p>Militaries use remote and autonomous systems as “force multipliers” to increase the impact of their troops and protect their soldiers’ lives. AI systems can make soldiers more efficient, and are likely to enhance the speed and lethality of warfare – even as humans become less visible on the battlefield, instead gathering intelligence and targeting from afar. </p>
<p>When militaries can kill at will, with little risk to their own soldiers, will the current ethical thinking about war prevail? Or will the increasing use of AI also increase the dehumanisation of adversaries and the disconnect between wars and the societies in whose names they are fought?</p>
<h2>AI in war</h2>
<p>AI is having an impact at all levels of war, from “intelligence, surveillance and reconnaissance” support, like the IDF’s Habsora system, through to “lethal autonomous weapons systems” that can choose and attack targets <a href="https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems">without human intervention</a>.</p>
<p>These systems have the potential to reshape the character of war, making it easier to enter into a conflict. As complex and distributed systems, they may also make it more difficult to signal one’s intentions – or interpret those of an adversary – in the context of an escalating conflict.</p>
<p>To this end, AI can <a href="https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning">contribute to mis- or disinformation</a>, creating and amplifying dangerous misunderstandings in times of war. </p>
<p>AI systems may increase the human tendency to trust suggestions from machines (a tendency underscored by the Habsora system being named after the infallible word of God), opening up uncertainty over <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2018.1481907">how far to trust</a> autonomous systems. The boundaries of an AI system that interacts with other technologies and with people may not be clear, and there may be <a href="https://www.jstor.org/stable/j.ctv11g97wm">no way to know who or what has “authored” its outputs</a>, no matter how objective and rational they may seem.</p>
<h2>High-speed machine learning</h2>
<p>Perhaps one of the most basic and important changes we are likely to see driven by AI is an increase in the speed of warfare. This may change how we understand <a href="https://www.rand.org/pubs/research_reports/RR2797.html">military deterrence</a>, which assumes humans are the primary actors and sources of intelligence and interaction in war.</p>
<p>Militaries and soldiers frame their decision-making through what is called the “<a href="https://fhs.brage.unit.no/fhs-xmlui/bitstream/handle/11250/2683228/Boyds%20OODA%20Loop%20Necesse%20vol%205%20nr%201.pdf?sequence=1&isAllowed=y">OODA loop</a>” (for observe, orient, decide, act). A faster OODA loop can help you outmanoeuvre your enemy. The goal is to avoid slowing down decisions through excessive deliberation, and instead to match the accelerating tempo of war.</p>
<p>So the use of AI is potentially justified on the basis it can interpret and synthesise huge amounts of data, processing it and delivering outputs at rates that far surpass human cognition. </p>
<p>But where is the space for ethical deliberation in an increasingly fast and data-centric OODA loop cycle happening at a safe distance from battle?</p>
<p>Israel’s targeting software is an example of this acceleration. A former head of the IDF has <a href="https://www.ynetnews.com/magazine/article/ry0uzlhu3">said</a> that human intelligence analysts might produce 50 bombing targets in Gaza each year, but the Habsora system can produce 100 targets a day, along with real-time recommendations for which ones to attack.</p>
<p>How does the system produce these targets? It does so through probabilistic reasoning offered by machine learning algorithms.</p>
<p>Machine learning algorithms learn through data. They learn by seeking patterns in huge piles of data, and their success is contingent on the data’s quality and quantity. They make recommendations based on probabilities. </p>
<p>The probabilities are based on pattern-matching. If a person has enough similarities to other people labelled as enemy combatants, they too may be labelled a combatant. </p>
<h2>The problem of AI-enabled targeting at a distance</h2>
<p>Some claim machine learning enables <a href="https://philpapers.org/rec/ARKTCF">greater precision in targeting</a>, which makes it easier to avoid harming innocent people and to use a proportional amount of force. However, the promise of more precise airstrike targeting has not been borne out in the past, as the high toll of <a href="https://airwars.org/">declared and undeclared civilian casualties</a> from the global war on terror shows. </p>
<p>Moreover, the difference between a combatant and a civilian is <a href="https://philpapers.org/rec/WILSAU">rarely self-evident</a>. Even humans frequently cannot tell who is and is not a combatant.</p>
<p>Technology does not change this fundamental truth. Social categories and concepts are often not objective, but contested or specific to time and place. Computer vision combined with algorithms is more effective in predictable environments where concepts are objective, reasonably stable and internally consistent. </p>
<h2>Will AI make war worse?</h2>
<p>We live in a time of <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8497.2005.00393.x">unjust wars</a> and military occupations, egregious <a href="https://www.defence.gov.au/about/reviews-inquiries/afghanistan-inquiry">violations of the rules of engagement</a>, and an incipient <a href="https://www.nytimes.com/2023/03/25/world/asia/asia-china-military-war.html">arms race</a> in the face of US–China rivalry. In this context, the inclusion of AI in war may add new complexities that exacerbate, rather than prevent, harm. </p>
<p>AI systems make it easier for actors in war to <a href="https://doi.org/10.48550/arXiv.1802.07228">remain anonymous</a>, and can render invisible the source of violence or the decisions which lead to it. In turn, we may see increasing disconnection between militaries, soldiers, and civilians, and the wars being fought in the name of the nation they serve.</p>
<p>And as AI grows more common in war, militaries will develop countermeasures to undermine it, creating a loop of escalating militarisation. </p>
<h2>What now?</h2>
<p>Can we control AI systems to head off a future in which warfare is driven by increasing reliance on technology underpinned by learning algorithms? Controlling AI development in any area, particularly via laws and regulations, has proven difficult.</p>
<p>Many suggest we need better laws to account for systems underpinned by machine learning, but even this is not straightforward. Machine learning algorithms are <a href="https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/">difficult to regulate</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/us-military-plans-to-unleash-thousands-of-autonomous-war-robots-over-next-two-years-212444">US military plans to unleash thousands of autonomous war robots over next two years</a>
</strong>
</em>
</p>
<hr>
<p>AI-enabled weapons may program and update themselves, evading legal requirements for certainty. The engineering maxim “software is never done” implies that the law may never match the speed of technological change.</p>
<p>The quantitative act of estimating likely numbers of civilian deaths in advance, which the Habsora system does, tells us little about the qualitative dimensions of targeting. In isolation, systems like Habsora cannot tell us whether a strike would be ethical or legal (that is, whether it is proportionate, discriminate and necessary, among other considerations).</p>
<p>AI should support democratic ideals, not undermine them. Trust in governments, institutions, and militaries <a href="https://www.un.org/development/desa/dspd/2021/07/trust-public-institutions/">is eroding</a> and needs to be restored if we plan to apply AI across a range of military practices. We need to deploy critical ethical and political analysis to interrogate emerging technologies and their effects, so that any form of military violence is treated as a last resort.</p>
<p>Until then, machine learning algorithms are best kept separate from targeting practices. Unfortunately, the world’s armies are heading in the opposite direction.</p>
<p class="fine-print"><em><span>Bianca Baggiarini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>AI systems will accelerate the pace of war.Bianca Baggiarini, Lecturer, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2124442023-08-30T03:18:10Z2023-08-30T03:18:10ZUS military plans to unleash thousands of autonomous war robots over next two years<p>The United States military plans to start using thousands of autonomous weapons systems in the next two years in a bid to counter China’s growing power, US Deputy Secretary of Defense Kathleen Hicks <a href="https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/">announced</a> in a speech on Monday.</p>
<p>The so-called Replicator initiative aims to work with defence and other tech companies to produce <a href="https://www.defense.gov/News/News-Stories/Article/Article/3507514/hicks-underscores-us-innovation-in-unveiling-strategy-to-counter-chinas-militar/">high volumes of affordable systems</a> for all branches of the military.</p>
<p>Military systems capable of various degrees of independent operation have become increasingly common over the past decade or so. But the scale and scope of the US announcement makes clear the future of conflict has changed: the age of warfighting robots is upon us. </p>
<h2>An idea whose time has come</h2>
<p>Over the past decade, there has been considerable development of advanced robotic systems for military purposes. Many of these have been based on modifying commercial technology, which itself has become more capable, cheaper and more widely available. </p>
<p>More recently, the focus has shifted onto experimenting with how to best use these in combat. Russia’s war in Ukraine has demonstrated that the technology is ready for real-world deployment. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ukraine-war-drones-are-changing-the-conflict-both-on-the-frontline-and-beyond-211460">Ukraine war: drones are changing the conflict – both on the frontline and beyond</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://warontherocks.com/2022/04/loitering-munitions-in-ukraine-and-beyond/">Loitering munitions</a>, a form of robot air vehicle, have been widely used to find and attack armoured vehicles and artillery. Ukrainian naval attack drones <a href="https://www.businessinsider.com/ukraine-sea-drones-paralyzed-russia-black-sea-fleet-spy-chief-2023-8">have paralysed</a> Russia’s Black Sea fleet, forcing their crewed warships to stay in port. </p>
<p>Military robots are an idea whose time has come.</p>
<h2>Robots everywhere</h2>
<p>In her speech, Hicks talked of a perceived urgent need to change how wars are fought. She <a href="https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/">declared</a>, in somewhat impenetrable Pentagon-speak, that the new Replicator program would </p>
<blockquote>
<p>field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months. </p>
</blockquote>
<p>Decoding this, “autonomous” means a robot that can carry out complex military missions without human intervention. </p>
<p>“Attritable” means the robot is cheap enough that it can be placed at risk and lost if the mission is of high priority. Such a robot is not quite designed to be disposable, but it would be reasonably affordable so many can be bought and combat losses replaced. </p>
<p>Finally, “multiple domains” means robots on land, at sea, in the air and in space. In short, robots everywhere for all kinds of tasks.</p>
<h2>The robot mission</h2>
<p>For <a href="https://www.cbsnews.com/news/pentagon-reviews-say-china-poses-greatest-security-challenge-to-u-s-while-russia-is-acute-threat/">the US military</a>, Russia is an “acute threat” but China is the “pacing challenge” against which to benchmark its military capabilities. </p>
<p>China’s People’s Liberation Army is seen as having a significant advantage in terms of “mass”: it has more people, more tanks, more ships, more missiles and so on. The US may have better-quality equipment, but China wins on quantity. </p>
<p>By quickly building thousands of “attritable autonomous systems”, the Replicator program will now give the US the numbers considered necessary to win future major wars. </p>
<p>The imagined future war of most concern is a hypothetical battle for Taiwan, which <a href="https://thehill.com/policy/defense/3840337-generals-memo-spurs-debate-could-china-invade-taiwan-by-2025/">some postulate</a> could soon begin. Recent <a href="https://www.thedrive.com/the-war-zone/massive-drone-swarm-over-strait-decisive-in-taiwan-conflict-wargames">tabletop wargames</a> have suggested large swarms of robots could be the decisive element for the US in defeating any major Chinese invasion. </p>
<p>However, Replicator is also looking further ahead, and aims to institutionalise mass production of robots for the long term. Hicks argues: </p>
<blockquote>
<p>We must ensure [China’s] leadership wakes up every day, considers the risks of aggression, and concludes, “today is not the day” — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond.</p>
</blockquote>
<h2>A brave new world?</h2>
<p>One great concern about autonomous systems is whether their use can conform to the laws of armed conflict.</p>
<p>Optimists argue robots can be carefully programmed to follow rules, and in the heat and confusion of combat they may even obey better than humans. </p>
<p>Pessimists counter by noting not all situations can be foreseen, and robots may well misunderstand and attack when they should not. They have a point. </p>
<p>Among earlier autonomous military systems, the Phalanx close-in point defence gun and the Patriot surface-to-air missile have both made serious errors in combat. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-researchers-should-not-retreat-from-battlefield-robots-they-should-engage-them-head-on-45367">AI researchers should not retreat from battlefield robots, they should engage them head-on</a>
</strong>
</em>
</p>
<hr>
<p>Used only once in combat, during the first Gulf War in 1991, the <a href="http://www.navweaps.com/index_tech/tech-103.php">Phalanx fired</a> at a chaff decoy cloud rather than countering the attacking anti-ship missile. The more modern Patriot has proven effective in shooting down attacking ballistic missiles, but also <a href="https://css.ethz.ch/en/services/digital-library/articles/article.html/976797da-7b8b-4e86-84f4-4052f394d2e1">twice shot down</a> friendly aircraft during the second Gulf War in 2003, killing their human crews.</p>
<p>Clever design may overcome such problems in future autonomous systems. However, Hicks promised a “responsible and ethical approach to AI and autonomous systems” in her speech – which suggests any system able to kill targets will still need formal authorisation from a human to do so. </p>
<h2>A global change</h2>
<p>The US may be the first nation to field large numbers of autonomous systems, but other countries will be close behind. China is an obvious candidate, with great strength in both <a href="https://news.usni.org/2023/06/26/china-looking-to-become-artificial-intelligence-global-leader-report-says">artificial intelligence</a> and <a href="https://www.aljazeera.com/news/2023/1/24/how-china-became-the-worlds-leading-exporter-of-combat-drones">combat drone production</a>.</p>
<p>However, because much of the technology behind autonomous military drones has been developed for civilian purposes, it is widely available and relatively cheap. Autonomous military systems are not just for the great powers, but could also soon be fielded by many middle and smaller powers. </p>
<p><a href="https://thebulletin.org/2021/05/was-a-flying-killer-robot-used-in-libya-quite-possibly/">Libya</a> and <a href="https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/">Israel</a>, among others, have reportedly deployed autonomous weapons, and <a href="https://www.cnbc.com/2023/03/28/killer-drones-turkeys-growing-defense-industry-is-boosting-its-global-clout.html">Turkish-made drones</a> have proved important in the Ukraine war. </p>
<p>Australia is another country keenly interested in the possibilities of autonomous weapons. The Australian Defence Force is today building <a href="https://www.australiandefence.com.au/defence/unmanned/government-accelerates-ghost-bat-program">the MQ-28 Ghost Bat</a> autonomous fast jet air vehicle, robot <a href="https://www.aumanufacturing.com.au/bae-systems-turns-m113-personnel-carriers-autonomous">mechanised armoured vehicles</a>, robot <a href="https://www.aumanufacturing.com.au/australian-army-runs-autonomous-highway-truck-convoy">logistic trucks</a> and <a href="https://breakingdefense.com/2022/05/anduril-bets-it-can-build-3-large-autonomous-subs-for-aussies-in-3-years/">robot submarines</a>, and is already using the <a href="https://www.dailytelegraph.com.au/news/national/adf-to-use-sydney-engineering-firms-unmanned-solar-powered-boats-to-patrol-seas/news-story/ba7c6dd18c405c58e71e72da68699c39">Bluebottle robot sailboat</a> for maritime border surveillance in the Timor Sea. </p>
<p>And in a move that foreshadowed the Replicator initiative, the Australian government last month called for local companies to suggest how <a href="https://www.smh.com.au/politics/federal/we-re-trailing-the-world-push-for-aussie-made-defence-drones-20230820-p5dxxu.html">they might build</a> very large numbers of military aerial drones in-country in the next few years. </p>
<p>At least one Australian company, SYPAQ, is <a href="https://www.smh.com.au/politics/federal/aussie-cardboard-drones-used-in-attack-on-russian-airfield-20230829-p5e0bv.html">already on the move</a>, sending a number of its cheap, cardboard-bodied drones to bolster Ukraine’s defences. </p>
<p class="fine-print"><em><span>Peter Layton does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The age of autonomous weapons is upon usPeter Layton, Visiting Fellow, Griffith Asia Institute, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2072012023-06-07T22:00:06Z2023-06-07T22:00:06ZAUKUS is already trialling autonomous weapons systems – where is NZ’s policy on next-generation warfare?<p>Defence Minister Andrew Little’s <a href="https://www.theguardian.com/world/2023/mar/28/new-zealand-may-join-aukus-pacts-non-nuclear-component">recent announcement</a> that New Zealand would be “willing to explore” participation in military technology sharing – or “pillar two” – under the AUKUS security arrangement has already divided opinion.</p>
<p>Proponents <a href="https://www.lowyinstitute.org/the-interpreter/aukus-nz-win-win">have argued</a> participation will enhance New Zealand’s security and help deter China in an increasingly contested geopolitical environment. Critics <a href="https://www.scoop.co.nz/stories/PO2304/S00106/aukus-and-peace.htm">have suggested</a> it would compromise New Zealand’s antinuclear commitment, undermine diplomacy and raise the prospect of a destabilising arms race in the Pacific region.</p>
<p>But missing from the debate so far is any clear analysis of how participation in pillar two of AUKUS might infringe on <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">New Zealand’s policy approach</a> to autonomous weapons systems (AWS).</p>
<p>That’s because of a lack of clarity about two things: what kinds of technology sharing and development would be included under pillar two, and what New Zealand’s current policy position on AWS actually is.</p>
<h2>What do we know about pillar two?</h2>
<p>When AUKUS was announced, the promise to equip Australia with nuclear-powered submarines naturally dominated headlines. The other focus of the partnership, however, is cooperation on “<a href="https://pmtranscripts.pmc.gov.au/sites/default/files/AUKUS-factsheet.pdf">advanced capabilities</a>”.</p>
<p>While little detail has been released publicly, these capabilities include a range of high-tech applications: undersea robotics and autonomous systems, quantum technologies, AI and autonomy, advanced cyber technologies, hypersonic and counter-hypersonic capabilities, electronic warfare, defence innovation and information sharing.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/approach-with-caution-why-nz-should-be-wary-of-buying-into-the-aukus-security-pact-203915">Approach with caution: why NZ should be wary of buying into the AUKUS security pact</a>
</strong>
</em>
</p>
<hr>
<p>In some ways, pillar two of AUKUS is more significant than pillar one. It is certainly more imminent than the submarine delivery. It may also be “of greater long-term value and more strategically challenging”, <a>according to analysis</a> by the Australian Strategic Policy Institute.</p>
<p>There are a lot of uncertainties with emerging technologies, with no way to predict how they will develop or be adopted for military purposes. They also have more wide-reaching societal and economic implications, since much of the research and development capacity sits in civilian industries and universities.</p>
<h2>AUKUS and autonomous systems</h2>
<p>Ultimately, of course, AUKUS is about competing militarily with China. It’s the “most consequential strategic competitor” of the US and its allies and partners, <a href="https://www.defenceconnect.com.au/key-enablers/12040-us-to-focus-on-collaborative-defence-innovation-with-australia-uk">according to</a> US Assistant Secretary of Defense Mara Karlin.</p>
<p>Pillar two cooperation, Karlin argues, is necessary to accelerate military innovation, enhance interoperability and integrate the “defence industrial base” across partner countries in response to the threat posed by China.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-australia-signs-up-for-nuclear-subs-nz-faces-hard-decisions-over-the-aukus-alliance-201946">As Australia signs up for nuclear subs, NZ faces hard decisions over the AUKUS alliance</a>
</strong>
</em>
</p>
<hr>
<p>Last month, it was <a href="https://breakingdefense.com/2023/05/the-ai-side-of-aukus-uk-reveals-ground-breaking-allied-tech-demo/?_ga=2.240332611.1589671307.1685057650-572721413.1642293259">revealed</a> Australia, the US and the UK had held a trial of AUKUS advanced capabilities, focused on AI and autonomy. According to the UK Ministry of Defence, the event succeeded in achieving several “world firsts”, including AI-enabled assets from the three countries successfully operating as a “swarm”.</p>
<p>The systems were “testing target identification capabilities”, indicating the likely lethal applications of some pillar two technologies.</p>
<h2>Where does NZ stand now?</h2>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530465/original/file-20230606-29-buvopp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Former disarmament minister Phil Twyford.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<p>While some clarity is beginning to emerge on the technologies being explored under pillar two, New Zealand’s policy approach to these types of technologies has become increasingly murky.</p>
<p>Following advocacy by the former minister for disarmament and arms control, Phil Twyford, cabinet committed to supporting international regulations and bans on AWS in late 2021.</p>
<p>When Twyford <a href="https://www.beehive.govt.nz/release/government-commits-international-effort-ban-and-regulate-killer-robots">announced the policy</a>, he declared the emergence of lethal AWS would be “abhorrent and inconsistent with New Zealand’s interests and values”, and would have “significant implications for global peace and security”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/if-aukus-is-all-about-nuclear-submarines-how-can-it-comply-with-nuclear-non-proliferation-treaties-a-law-scholar-explains-201760">If AUKUS is all about nuclear submarines, how can it comply with nuclear non-proliferation treaties? A law scholar explains</a>
</strong>
</em>
</p>
<hr>
<p>Yet <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">the cabinet paper</a> itself contained significant caveats. These were aimed at allowing for maintenance of interoperability with key defence partners, and ensuring the New Zealand tech sector could continue to pursue “the responsible development and use of AI”.</p>
<p>Twyford’s leadership on this policy position is important given the loss of his ministerial role following Chris Hipkins’ first <a href="https://www.rnz.co.nz/news/political/483394/prime-minister-chris-hipkins-reveals-cabinet-reshuffle">cabinet reshuffle</a> as prime minister. Whether the approach outlined in the 2021 cabinet paper survives his demotion is not yet clear.</p>
<p>Thus far, his successor in the disarmament and arms control role, Nanaia Mahuta, has made no statements on AWS policy.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530464/original/file-20230606-29-i0atcs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Defence Minister Andrew Little: interest in collaboration on cybersecurity, quantum computing and AI.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<h2>Interests and values</h2>
<p>Given these developments, Andrew Little’s openness to considering pillar two cooperation under AUKUS takes on an interesting complexion and raises numerous questions.</p>
<p>Some have <a href="https://www.newshub.co.nz/home/shows/2023/05/newshub-nation-political-panel-discuss-defence-force-funding-and-aukus.html">suggested</a> the defence minister has moderated his original comments on openness to pillar two, perhaps having faced some pushback from the prime minister and foreign minister. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/progress-in-detection-tech-could-render-submarines-useless-by-the-2050s-what-does-it-mean-for-the-aukus-pact-201187">Progress in detection tech could render submarines useless by the 2050s. What does it mean for the AUKUS pact?</a>
</strong>
</em>
</p>
<hr>
<p>Most recently, <a href="https://asia.nikkei.com/Politics/International-relations/Indo-Pacific/New-Zealand-interested-in-AUKUS-cooperation-in-non-nuclear-tech">Little has emphasised</a> the uncertainty around what New Zealand could offer under pillar two. But he has maintained there is an interest in collaboration on cybersecurity, quantum computing and artificial intelligence.</p>
<p>The recent tests of military AI technologies by the AUKUS partners, and the associated comments on their likely military purposes, point to the likelihood of various combinations of lethal and autonomous capabilities emerging from pillar two cooperation.</p>
<p>Before making any commitment to engaging in this part of the AUKUS arrangement, New Zealand’s political leaders need to consider carefully whether these technologies are in keeping with the “interests and values” behind Phil Twyford’s initial push toward banning or regulating AWS.</p>
<p class="fine-print"><em><span>Jeremy Moses receives funding from Royal Society of New Zealand Marsden Fund. </span></em></p><p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>
<p class="fine-print"><em>While the technologies being explored under ‘pillar two’ of the AUKUS security pact are becoming clearer, New Zealand’s policy on autonomous weapons and military AI has become increasingly murky. Jeremy Moses, Associate Professor in International Relations, University of Canterbury; Sian Troath, Postdoctoral fellow, University of Canterbury. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>The defence review fails to address the third revolution in warfare: artificial intelligence</h2>
<p class="fine-print"><em>Published 28 April 2023</em></p>
<figure><img src="https://images.theconversation.com/files/523372/original/file-20230428-15-gy0qd6.jpeg?ixlib=rb-1.1.0&rect=102%2C56%2C7498%2C4102&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Throughout history, war has been irrevocably changed by the advent of new technologies. Historians of war have identified several technological revolutions.</p>
<p>The first was the <a href="https://www.brown.edu/Departments/Joukowsky_Institute/courses/13things/7687.html">invention of gunpowder</a> by people in ancient China. It gave us muskets, rifles, machine guns and, eventually, all manner of explosive ordnance. It’s uncontroversial to claim gunpowder completely transformed how we fought war. </p>
<p>Then came the invention of the nuclear bomb, raising the stakes higher than ever. Wars could be ended with just a single weapon, and life as we know it could be ended by a single nuclear stockpile.</p>
<p>And now, war has – like so many other aspects of life – entered the age of automation. AI will cut through the “fog of war”, transforming where and how we fight. Small, cheap and increasingly capable uncrewed systems will replace large, expensive, crewed weapon platforms.</p>
<p>We’ve seen the beginnings of this in Ukraine, where sophisticated armed home-made drones <a href="https://www.bbc.com/news/technology-65389215">are being developed</a>, where Russia is <a href="https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines">using AI “smart” mines</a> that explode when they detect footsteps nearby, and where Ukraine successfully used autonomous “drone” boats in a major attack on the <a href="https://theconversation.com/ukraine-how-uncrewed-boats-are-changing-the-way-wars-are-fought-at-sea-201606">Russian navy at Sevastopol</a>.</p>
<p>We also see this revolution occurring in our own forces in Australia. And all of this raises the question: why has the government’s recent defence strategic review failed to seriously consider the implications of AI-enabled warfare?</p>
<h2>AI has crept into Australia’s military</h2>
<p>Australia already has a range of autonomous weapons and vessels that can be deployed in conflict. </p>
<p>Our air force expects to acquire a number of 12 metre-long uncrewed <a href="https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat">Ghost Bat</a> aircraft to ensure our very expensive F-35 <a href="https://www.airforce.gov.au/our-work/projects-and-programs/ghost-bat">fighter jets</a> aren’t made sitting ducks by advancing technologies. </p>
<p>On the sea, the defence force has been testing a new type of uncrewed surveillance vessel called <a href="https://www.minister.defence.gov.au/media-releases/2023-03-06/first-ocius-bluebottle-uncrewed-surface-vessels-adf">the Bluebottle</a>, developed by local company Ocius. And under the sea, Australia is building a prototype six metre-long Ghost Shark <a href="https://www.defence.gov.au/news-events/news/2022-12-14/ghost-shark-stealthy-game-changer">uncrewed submarine</a>. </p>
<p>It also looks set to be developing many more technologies like this in the future. The government’s <a href="https://www.theaustralian.com.au/nation/defence/3bn-accelerator-puts-war-hitech-on-fast-track/news-story/4b4cabf8e40b37ef687d30ce3ea121d0">just announced A$3.4 billion defence innovation “accelerator”</a> will aim to get cutting-edge military technologies, including hypersonic missiles, directed energy weapons and autonomous vehicles, into service sooner.</p>
<p>How then do AI and autonomy fit into our larger strategic picture?</p>
<p>The recent defence strategic review is the latest analysis of whether Australia has the necessary defence capability, posture and preparedness to defend its interests through the next decade and beyond. You’d expect AI and autonomy would be a significant concern – especially since the review recommends <a href="https://www.afr.com/politics/federal/defence-rejig-costs-budget-19b-and-rising-20230424-p5d2qw">spending a not insignificant A$19 billion</a> over the next four years. </p>
<p>Yet the review mentions autonomy only twice (both times in the context of existing weapons systems) and AI once (as one of the four pillars of the AUKUS submarine program). </p>
<h2>Countries are preparing for the third revolution</h2>
<p>Around the world, major powers have made it clear they consider AI a central component of the planet’s military future. </p>
<p>The House of Lords in the United Kingdom is holding a <a href="https://committees.parliament.uk/committee/646/ai-in-weapon-systems-committee/">public inquiry</a> into the use of AI in weapons systems. In Luxembourg, the government just hosted an <a href="https://www.laws-conference.lu/">important conference</a> on autonomous weapons. And China has announced its intention to become the world leader in AI by 2030. Its New Generation AI Development Plan <a href="https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/">proclaims</a> “AI is a strategic technology that will lead the future”, both in a military and economic sense.</p>
<p>Similarly, Russian President Vladimir Putin has <a href="https://edition.cnn.com/2017/09/01/world/putin-artificial-intelligence-will-rule-world/index.html">declared that</a> “whoever becomes the leader in this sphere will become ruler of the world” – while the United States has <a href="https://usacac.army.mil/sites/default/files/publications/17855.pdf">adopted a</a> “third offset strategy” that will invest heavily in AI, autonomy and robotics. </p>
<p>Unless we give more focus to AI in our military strategy, we risk being left fighting wars with outdated technologies. Russia saw the painful consequences of this last year, when its missile cruiser Moskva, the flagship of the Black Sea fleet, <a href="https://www.bbc.com/news/world-europe-61103927">was sunk</a> after being distracted by a drone. </p>
<h2>Future regulation</h2>
<p>Many people (including myself) hope autonomous weapons will soon be regulated. I was invited as an expert witness to an intergovernmental <a href="https://www.amnesty.org/en/latest/news/2023/02/more-than-30-countries-call-for-international-legal-controls-on-killer-robots/">meeting in Costa Rica</a> earlier this year, where 30 Latin and Central American nations called for regulation – many for the first time. </p>
<p>Regulation will hopefully ensure meaningful human control is maintained over autonomous weapon systems (although we’re yet to agree on what “meaningful control” will look like).</p>
<p>But regulation won’t make AI go away. We can still expect to see AI, and some levels of autonomy, as vital components in our defence in the near future.</p>
<p>There are instances, such as in minefield clearing, where autonomy is highly desirable. Indeed, AI will be very useful in managing the information space and in military logistics (where its use won’t be subject to the ethical challenges posed in other settings, such as when using lethal autonomous weapons).</p>
<p>At the same time, autonomy will create strategic challenges. By lowering costs and allowing forces to scale rapidly, it will change the geopolitical order. Turkey, for example, is becoming a <a href="https://www.aspistrategist.org.au/has-turkey-become-an-armed-drone-superpower/">major drone superpower</a>. </p>
<h2>We need to prepare</h2>
<p>Australia needs to consider how it might defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us – and where it might be impossible to determine the attacker. A review that ignores all of this leaves us woefully unprepared for the future. </p>
<p>We also need to engage more constructively in ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defence is to be found in the political arena, and not the military one.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bet-youre-on-the-list-how-criticising-smart-weapons-got-me-banned-from-russia-185399">'Bet you're on the list': how criticising 'smart weapons' got me banned from Russia</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council as an ARC Laureate Fellow. He has been banned indefinitely from Russia for his outspoken criticism of Russia's use of AI weapons in Ukraine. </span></em></p>
<p class="fine-print"><em>AI is going to fundamentally transform how nations wage war. By failing to address it, the defence review leaves Australia unprepared for the future of war. Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>How Russian and Iranian drone strikes further dehumanize warfare</h2>
<p class="fine-print"><em>Published 3 April 2023</em></p>
<figure><img src="https://images.theconversation.com/files/517688/original/file-20230327-26-ixfgj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1997%2C1370&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An unmanned U.S. Predator drone flies over Kandahar Air Field, southern Afghanistan, on a moon-lit night several years ago. Drone strikes are now a major feature of modern warfare, including in Ukraine and Syria.</span> <span class="attribution"><span class="source">(AP Photo/Kirsty Wigglesworth)</span></span></figcaption></figure>
<p>Along with the recent <a href="https://www.cnn.com/2023/03/24/middleeast/us-syria-drone-strike-analysis-intl/index.html">reciprocal drone strikes by Iran and the United States in Syria</a>, Russia continues to unleash its arsenal on Ukrainian <a href="https://www.nytimes.com/interactive/2022/03/23/world/europe/ukraine-civilian-attacks.html">civilian and military targets alike</a>. While the Russian armies have started using <a href="https://www.forbes.com/sites/davidaxe/2023/03/13/russias-best-tank-army-might-have-no-choice-but-to-reequip-with-60-year-old-t-62s/?sh=3fd316ec3624">outdated weapons</a>, novel technologies remain the objects of fascination on the battlefield.</p>
<p><a href="https://www.bbc.com/news/world-europe-60806151">Hypersonic missiles</a> and <a href="https://www.reuters.com/world/europe/putin-russia-pay-increased-attention-boosting-nuclear-forces-2023-02-22/">nuclear weapons</a> have understandably grabbed media attention. However, <a href="https://www.wsj.com/articles/western-supplied-air-defense-helps-ukraine-repel-russian-drone-attacks-11672750680">drone warfare</a> continues to occupy a central role in the conflict.</p>
<p>Ukraine’s not the only battlefield. Drone warfare has played a significant role in the Azerbaijan-Armenian conflict, with Armenia’s superior conventional forces being challenged by <a href="https://jamestown.org/program/tactical-reasons-behind-military-breakthrough-in-karabakh-conflict/">the kamikaze drones, strike UAVs and remotely controlled planes of Azerbaijan</a>. A world away, <a href="https://www.nytimes.com/2022/09/10/world/asia/china-taiwan-drones.html">China’s drones continue to test Taiwan’s defensive capabilities and readiness.</a></p>
<p>Drones are not the only weapons. As the global <a href="https://hcss.nl/event/summit-responsible-artificial-intelligence-military-domain-reaim-2023/">Summit on Responsible AI in the Military Domain (REAIM)</a> illustrates, there is a growing recognition that lethal autonomous weapons systems (LAWS) pose a threat that must be reckoned with.</p>
<p>Understanding this threat requires grasping the psychological, social and technological challenges they present. </p>
<h2>Killing and psychological distance</h2>
<p>Psychology is at the heart of all conflicts. Whether in terms of perceived existential or territorial threats, individuals band together in groups to make gains or avoid losses.</p>
<p>A reluctance to kill stems from our <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev-psych-010213-115045">perceived humanity</a> and <a href="https://doi.org/10.1093/scan/nsq011">membership in the same community</a>. By turning <a href="https://doi.org/10.1080/17541328.2014.947794">people into statistics</a> and dehumanizing them, we further dull our moral sense.</p>
<p>LAWS remove us from the battlefield. As we distance ourselves from human suffering, <a href="https://www.ojp.gov/ncjrs/virtual-library/abstracts/killing-psychological-cost-learning-kill-war-and-society">lethal decisions become easier</a>. Research has demonstrated that distance is also associated with <a href="https://doi.org/10.1037/0033-2909.123.3.238">more antisocial behaviours</a>. When viewing potential targets from drone-like perspectives, people become <a href="https://doi.org/10.3389/fpsyg.2015.02008">morally disengaged</a>. </p>
<figure class="align-center ">
<img alt="A young man stands with out-stretched arms with a drone in the sky in front of him." src="https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517682/original/file-20230327-720-mu4wn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A young man flies a drone while testing it on the outskirts of Kyiv, Ukraine, in June 2022. Drones are being extensively used by Russian and Ukrainian troops.</span>
<span class="attribution"><span class="source">(AP Photo/Natacha Pisarenko)</span></span>
</figcaption>
</figure>
<h2>Autonomy, intelligence differ</h2>
<p>Many modern weapons rely on artificial intelligence (AI), but not all forms of AI are autonomous. <a href="https://doi.org/10.4324/9781003143284">Autonomy and intelligence are two distinct characteristics</a>.</p>
<p>Autonomy has to do with control over specific operations. A weapons system might gather information and identify a target autonomously, while firing decisions are left to human operators. Alternatively, a human operator might decide on a target, releasing a self-guided weapon to target and detonate autonomously.</p>
<p>Autonomous weapons are not new to warfare. In the sea, <a href="https://www.history.navy.mil/research/library/online-reading-room/title-list-alphabetically/e/evolution-of-naval-weapons.html">self-propelled torpedos and naval mines</a> have been in use since the mid-1800s. On the ground, elementary land mines have given way to <a href="https://www.lawfareblog.com/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war">autonomous turrets</a>.</p>
<p>In the air, Nazi Germany wielded V1 and V2 rockets and <a href="https://defencyclopedia.com/2014/07/01/the-worlds-first-guided-missiles-v1-and-v2/">radio-controlled munitions</a>. <a href="https://www.airandspaceforces.com/PDF/MagazineArchive/Documents/2010/March%202010/0310bombs.pdf">Heat-seeking and laser-guided munitions followed in the 1960s and were used by the U.S. in Vietnam</a>. By the 1990s, <a href="https://doi.org/10.1080/00963402.2000.11456960">the era of “smart weapons”</a> was upon us, bringing with it questions of our <a href="https://doi.org/10.1145/65971.65973">ethical obligations</a>.</p>
<p>Contemporary LAWS have been framed as the <a href="https://www.foreignaffairs.com/articles/united-states/2017-12-20/why-troops-dont-trust-drones">“natural evolutionary path”</a> of warfare. We can draw parallels between single drones and smart weapons, although <a href="https://apps.dtic.mil/sti/pdfs/AD1039921.pdf">drone swarms represent a new kind of weapon</a>. </p>
<figure class="align-center ">
<img alt="Dozens of drones fly over a line of trees." src="https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517700/original/file-20230327-22-n804jx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Drone swarms are a new and lethal weapon of war.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>The ability to co-ordinate their actions gives swarms the potential to overwhelm human forces. That degree of co-ordination, in turn, requires a higher level of autonomy: if the swarm is sufficiently large, no single human operator could hope to maintain the situational awareness needed to control it. </p>
<p>By ceding lethal decisions to LAWS, their accuracy and reliability become paramount concerns.</p>
<h2>Accuracy and accountability</h2>
<p>The potential for <a href="https://behorizon.org/killed-by-algorithms-do-autonomous-weapons-reduce-risks/">reduced human error is often used to recommend LAWS</a>. While an actuarial approach to AI ethics is hardly the best or only way to <a href="https://doi.org/10.4324/9781003143284">make moral decisions about AI</a>, reliable data is essential to judge the accuracy and improve the operations of LAWS. However, it is often lacking. </p>
<p>A review of U.S. drone strikes over a 15-year period <a href="https://hri.law.columbia.edu/sites/default/files/publications/out_of_the_shadows.pdf">suggested that only about 20 per cent of more than 700 strikes were acknowledged by the government, with an estimated 400 civilian casualties</a>. </p>
<p>In the early stages of the Russian invasion of Ukraine, reports suggested that <a href="https://www.usatoday.com/story/news/world/2022/10/24/ukraine-russia-war-live-updates/10586619002/">Ukrainians destroyed about 85 per cent of the drones launched against them</a>. In the most recent attacks, they have destroyed <a href="https://www.reuters.com/world/europe/three-killed-russian-drone-strike-kyiv-region-officials-2023-03-22/">more than 75 per cent of the drones</a>.</p>
<p>These statistics might suggest drones aren’t particularly effective. But the minimal cost and large numbers of drones mean that even if a small proportion of the weapons are successful, the damage and casualties can be significant.</p>
<p>When assigning responsibility, we have to consider who manufactures these weapons. They are not always homegrown. Drones used by Russia in the Ukrainian conflict <a href="https://www.nytimes.com/2023/03/21/business/russia-china-drones-ukraine-war.html">hail from China</a> <a href="https://www.theguardian.com/world/2023/feb/12/iran-uses-boats-state-airline-smuggle-drones-into-russia">and Iran</a>. </p>
<p>Inside these drones, <a href="https://www.washingtonpost.com/technology/2022/02/11/russian-military-drones-ukraine/">many parts come from western manufacturers</a>. Understanding responsibility and accountability in conflicts requires that we consider the international supply chains that enable LAWS. </p>
<h2>Can LAWS be outlawed?</h2>
<p>Whether LAWS represent a <a href="https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and">unique threat to human rights</a> that <a href="https://news.un.org/en/story/2019/03/1035381">must be banned</a> — like <a href="https://www.un.org/disarmament/anti-personnel-landmines-convention/">landmines</a> — or otherwise controlled by international laws, there is widespread agreement that we must re-evaluate existing approaches to regulation.</p>
<p>REAIM’s work is not alone in attempting to regulate LAWS. The United Nations, <a href="https://reachingcriticalwill.org/disarmament-fora/ccw/2022/laws/documents">multilateral proposals</a> and countries like <a href="https://media.defense.gov/2023/Jan/25/2003149928/-1/-1/0/DOD-DIRECTIVE-3000.09-AUTONOMY-IN-WEAPON-SYSTEMS.PDF">the U.S.</a> <a href="https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201955E#a7">and Canada</a> have all developed, proposed or are reviewing the sufficiency of existing standards. </p>
<figure class="align-center ">
<img alt="Two soldiers dressed in battle fatigues look at a screen in an outdoor shelter." src="https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=392&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=392&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=392&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=493&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=493&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517684/original/file-20230327-18-c3vwv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=493&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Ukrainian servicemen correct artillery fire by drone at the frontline near Kharkiv, Ukraine, in July 2022.</span>
<span class="attribution"><span class="source">(AP Photo/Evgeniy Maloletka)</span></span>
</figcaption>
</figure>
<p>These efforts face an array of practical issues. In many cases, the principles are framed as best practices and viewed as voluntary rather than being enforceable. States might also be <a href="https://www.ploughshares.ca/publications/no-canadian-leadership-on-autonomous-weapons">reluctant to commit, in order to stay consistent with their allies</a>. </p>
<p>Treaties and regulations also create <a href="https://doi.org/10.1016/j.obhdp.2012.11.003">social dilemmas</a> — just as they do when contemplating <a href="https://ieeexplore.ieee.org/document/9139644">cyberweapons</a>, nations must decide whether they adhere to the rules while others develop superior capabilities.</p>
<p>Even if LAWS are <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">wholly or partially banned</a>, there is still considerable room for interpretation and rationalization.</p>
<p>In July 2022, Russia said responsibility lies with their operators and that LAWS can <a href="https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2022/gge/documents/Russia_July2022.pdf">“reduce the risk of intentional strikes against civilians and civilian facilities” and support “missions of maintaining or restoring international peace and security … [in] compliance with international law.”</a> These are hollow statements made by a hollow regime.</p>
<p>No matter how elegant the regulatory framework nor how straightforward the principles, adversarial nations are unlikely to abide by international agreements — especially knowing weapons like drones make it easier for soldiers psychologically removed from the realities of the battlefield to kill others.</p>
<p>As Russia’s war in Ukraine illustrates, by reframing conflicts, the use of LAWS can always be justified. Their ability to desensitize their users from the act of killing, however, must not be.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr has previously received funding from Army Research Laboratory and has served as a visiting scholar at the United States Military Academy and has worked as a consultant for the Canadian Department of National Defence. </span></em></p>
<p class="fine-print"><em>As Russia’s war in Ukraine illustrates, the use of lethal automated weapons, or LAWS, can always be justified. Their ability to desensitize their users from the act of killing, however, shouldn’t be. Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>War in Ukraine accelerates global drive toward killer robots</h2>
<p class="fine-print"><em>Published 21 February 2023</em></p>
<figure><img src="https://images.theconversation.com/files/510915/original/file-20230217-593-z3je8t.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4021%2C2924&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It wouldn't take much to turn this remotely operated mobile machine gun into an autonomous killer robot.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span></figcaption></figure><p>The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a <a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">Department of Defense directive</a>. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence in autonomous weapons. It follows a related <a href="https://www.nato.int/cps/en/natohq/official_texts_208376.htm">implementation plan</a> released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.” </p>
<p>Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in <a href="https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines#:%7E:text=Utah%2Dbased%20Fortem%20Technologies%20has,them%20%E2%80%94%20all%20without%20human%20assistance.">Ukraine</a> and <a href="https://foreignpolicy.com/2021/03/30/army-pentagon-nagorno-karabakh-drones/">Nagorno-Karabakh</a>: Weaponized artificial intelligence is the future of warfare.</p>
<p>“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of <a href="https://article36.org/">Article 36</a>, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said. </p>
<h2>Pressure of war</h2>
<p>But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.</p>
<p>This month, a key Russian manufacturer <a href="https://www.defenseone.com/technology/2023/01/russian-robot-maker-working-bot-target-abrams-leopard-tanks/382288/">announced plans</a> to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to <a href="https://www.forbes.com/sites/katyasoldak/2023/01/27/friday-january-27-russias-war-on-ukraine-daily-news-and-information-from-ukraine/">defend Ukrainian energy facilities</a> from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous <a href="https://www.avinc.com/tms/switchblade">Switchblade drone</a>, said the technology is <a href="https://apnews.com/article/russia-ukraine-war-drone-advances-6591dc69a4bf2081dcdd265e1c986203">already within reach</a> to convert these weapons to become fully autonomous. </p>
<p>Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “<a href="https://abcnews.go.com/Technology/wireStory/drone-advances-ukraine-bring-dawn-killer-robots-96112651">logical and inevitable next step</a>” and recently said that soldiers might see them on the battlefield in the next six months. </p>
<p>Proponents of fully autonomous weapons systems <a href="https://news.northeastern.edu/2019/11/15/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem/#_ga=2.7414138.976428111.1676666580-169995920.1676666580">argue that the technology will keep soldiers out of harm’s way</a> by keeping them off the battlefield. They will also allow for military decisions to be made at superhuman speed, allowing for radically improved defensive capabilities. </p>
<p>Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them. </p>
<p>By contrast, fully autonomous drones, like the so-called “<a href="https://fortemtech.com/products/dronehunter-f700/">drone hunters</a>” now <a href="https://u24.gov.ua/news/shahed_hunters_defenders">deployed in Ukraine</a>, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems. </p>
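The operational difference between these two modes comes down to where a human sits in the engagement loop. A minimal, purely illustrative sketch in Python makes the distinction concrete (all names, classifications and thresholds here are hypothetical, invented for illustration, and are not drawn from any real weapons system):

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Track:
    """A hypothetical sensor track of a candidate target."""
    track_id: int
    classification: str  # e.g. "uav", "vehicle", "unknown"
    confidence: float    # classifier confidence, 0.0-1.0


def is_valid_target(t: Track) -> bool:
    """Hypothetical engagement criteria for an incoming drone."""
    return t.classification == "uav" and t.confidence > 0.9


def engage(t: Track) -> str:
    """Stand-in for acting on a target; returns a log entry."""
    return f"engaged track {t.track_id}"


def operator_approves(t: Track) -> bool:
    """Stand-in for the human decision. In a 'human in the loop'
    system, nothing is engaged without this returning True."""
    return False  # default to no engagement in this sketch


def semi_autonomous(tracks: List[Track],
                    approve: Callable[[Track], bool] = operator_approves
                    ) -> List[str]:
    """Recommends targets but engages only with operator approval."""
    return [engage(t) for t in tracks if is_valid_target(t) and approve(t)]


def fully_autonomous(tracks: List[Track]) -> List[str]:
    """Selects and engages targets with no human intervention."""
    return [engage(t) for t in tracks if is_valid_target(t)]
```

The two functions apply identical targeting criteria; the only difference is the `approve` gate. Removing that single human check is, in control-flow terms, all that separates a semi-autonomous loitering munition from a fully autonomous one.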
<h2>Calling for a timeout</h2>
<p>Critics like <a href="https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/">The Campaign to Stop Killer Robots</a> have been advocating for more than a decade for a ban on the research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of <a href="https://www.stopkillerrobots.org/stop-killer-robots/digital-dehumanisation/">digital dehumanization</a>.</p>
<p>Together with <a href="https://www.hrw.org/topic/arms/killer-robots">Human Rights Watch</a>, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A soldier crouches on the ground peering into a black box as two small projectiles with wings are launched from tubes on either side of him" src="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=662&fit=crop&dpr=1 600w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=662&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=662&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=831&fit=crop&dpr=1 754w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=831&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=831&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings.</span>
<span class="attribution"><a class="source" href="https://madsciblog.tradoc.army.mil/wp-content/uploads/2021/06/Switchblade.jpg">U.S. Army AMRDEC Public Affairs</a></span>
</figcaption>
</figure>
<p>The organizations argue that the militaries <a href="https://research.northeastern.edu/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem-2/#:%7E:text=They%20found%20that%20there%20are,dollars%20into%20this%20arms%20race.">investing most heavily</a> in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the <a href="https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf">hands of terrorists and others outside of government control</a>.</p>
<p>The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “<a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">appropriate levels of human judgment over the use of force</a>.” Human Rights Watch <a href="https://www.hrw.org/news/2023/02/14/review-2023-us-policy-autonomy-weapons-systems">issued a statement</a> saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.</p>
<p>But as Gregory Allen, an expert from the national defense and international relations think tank <a href="https://www.csis.org/">Center for Strategic and International Studies</a>, argues, this language <a href="https://www.forbes.com/sites/davidhambling/2023/01/31/what-is-the-pentagons-updated-policy-on-killer-robots/">establishes a lower threshold</a> than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.” </p>
<p>The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Richard Moyes of the U.K.-based organization Article 36 noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy. </p>
<p>The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.</p>
<h2>Impossible balance?</h2>
<p>The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and whether such a balance is even possible, remains to be seen. </p>
<p>The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “<a href="https://www.icrc.org/en/document/reflections-70-years-geneva-conventions-and-challenges-ahead">cannot be transferred to a machine, algorithm or weapon system</a>.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.</p>
<p>If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.</p>
<p class="fine-print"><em><span>I am not connected to Article 36 in any capacity, nor have I received any funding from them. I did write a short opinion/policy piece on AWS that was posted on their website.</span></em></p>The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure.James Dawes, Professor of English, Macalester CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1921702022-10-16T19:02:23Z2022-10-16T19:02:23Z‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie<figure><img src="https://images.theconversation.com/files/489521/original/file-20221013-12-lm966h.jpg?ixlib=rb-1.1.0&rect=143%2C201%2C1386%2C862&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ghost Robotics Vision 60 Q-UGV.</span> <span class="attribution"><a class="source" href="https://www.dvidshub.net/image/7351259/ghost-robotics-vision-60-q-ugv-demo">US Space Force photo by Senior Airman Samuel Becker</a></span></figcaption></figure><p>You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies <a href="https://www.popularmechanics.com/military/a12043/4267549/">would watch the latest Bond movie</a> to see what technologies might be coming their way.</p>
<p>Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming <a href="https://www.thewrap.com/florence-pugh-dolly-movie-murderous-sex-robot-apple-tv-plus/">sex robot courtroom drama Dolly</a>.</p>
<p>I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a <a href="https://apex-magazine.com/short-fiction/dolly/">2011 short story</a> by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.</p>
<h2>The real killer robots</h2>
<p>Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic <a href="https://www.britannica.com/topic/Metropolis-film-1927">Metropolis</a>.</p>
<p>But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.</p>
<p>Indeed, contrary to recent fears, robots may never be sentient.</p>
<p>It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and <a href="https://www.militarystrategymagazine.com/article/drones-in-the-nagorno-karabakh-war-analyzing-the-data/">Nagorno-Karabakh</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/drones-over-ukraine-fears-of-russian-killer-robots-have-failed-to-materialise-180244">Drones over Ukraine: fears of Russian 'killer robots' have failed to materialise</a>
</strong>
</em>
</p>
<hr>
<h2>A war transformed</h2>
<p>Movies that feature much simpler armed drones, like Angel has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of <a href="https://theconversation.com/eye-in-the-sky-movie-gives-a-real-insight-into-the-future-of-warfare-56684">the real future of killer robots</a>. </p>
<p>On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store. </p>
<p>And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms. </p>
<p>This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see <a href="https://www.forbes.com/sites/amirhusain/2022/06/30/turkey-builds-a-hyperwar-capable-military/?sh=1500c4b855e1">Turkey emerging as a major drone power</a>.</p>
<p>And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies. </p>
<p>Robot manufacturers are, however, starting to push back against this future.</p>
<h2>A pledge not to weaponise</h2>
<p>Last week, six leading robotics companies pledged they would <a href="https://www.theguardian.com/technology/2022/oct/07/killer-robots-companies-pledge-no-weapons">never weaponise their robot platforms</a>. They include Boston Dynamics, maker of the Atlas humanoid robot, which can <a href="https://youtu.be/knoOXBLFQ-s">perform an impressive backflip</a>, and the Spot robot dog, which looks like it’s <a href="https://youtu.be/wlkCQXHEgjA">straight out of the Black Mirror TV series</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1578400002056953858&quot;}"></div></p>
<p>This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised <a href="https://newsroom.unsw.edu.au/news/science-tech/world%E2%80%99s-tech-leaders-urge-un-ban-killer-robots">an open letter</a> signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a <a href="https://newsroom.unsw.edu.au/news/science-tech/unsws-toby-walsh-voted-runner-global-award">global disarmament award</a>.</p>
<p>However, the pledge by leading robotics companies not to weaponise their robot platforms is more virtue signalling than anything else.</p>
<p>We have, for example, already seen <a href="https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back">third parties mount guns</a> on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was <a href="https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html">assassinated by Israeli agents</a> using a robot machine gun in 2020.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<h2>Collective action to safeguard our future</h2>
<p>The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.</p>
<p>Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation. </p>
<p>More important than any pledge from robotics companies, therefore, is that the UN Human Rights Council <a href="https://www.ohchr.org/en/news/2022/10/human-rights-council-adopts-six-resolutions-appoints-special-rapporteur-situation">has recently unanimously decided</a> to explore the human rights implications of new and emerging technologies like autonomous weapons. </p>
<p>Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation. </p>
<p>Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1885202022-08-21T20:03:06Z2022-08-21T20:03:06ZAustralia’s pursuit of ‘killer robots’ could put the trans-Tasman alliance with New Zealand on shaky ground<figure><img src="https://images.theconversation.com/files/479984/original/file-20220818-546-nyccc.jpg?ixlib=rb-1.1.0&rect=0%2C242%2C8986%2C4944&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>Australia’s recently <a href="https://www.defence.gov.au/about/reviews-inquiries/defence-strategic-review">announced</a> defence review, intended to be the most thorough in almost four decades, will give us a good idea of how Australia sees its role in an increasingly tense strategic environment.</p>
<p>As New Zealand’s only formal military ally, Australia’s defence choices will have significant implications, both for New Zealand and regional geopolitics.</p>
<p>There are several areas of contention in the trans-Tasman relationship. One is Australia’s pursuit of nuclear-powered submarines, which clashes with New Zealand’s anti-nuclear stance. Another lies in the two countries’ diverging approaches to autonomous weapons systems (AWS), colloquially known as “killer robots”. </p>
<figure class="align-center ">
<img alt="Boeing Australia's autonomous 'loyal wingman' aircraft" src="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Boeing Australia is developing autonomous ‘loyal wingman’ aircraft to complement manned aircraft.</span>
<span class="attribution"><a class="source" href="https://www.flightglobal.com/defence/boeing-australia-pushes-loyal-wingman-maiden-flight-to-2021/141691.article">Boeing</a>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In general, AWS are <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">considered</a> to be “weapons systems that, once activated, can select and engage targets without further human intervention”. There is, however, no internationally agreed definition.</p>
<p>New Zealand is involved with international attempts to ban and regulate AWS. It seeks a ban on systems that “are not sufficiently predictable or controllable to meet legal or ethical requirements” and advocates for “rules or limits to govern the development and use of AWS”. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1424978867614228485&quot;}"></div></p>
<p>If this seems vague to you, it should. This ambiguity in definition makes it difficult to determine which systems New Zealand seeks to ban or regulate.</p>
<h2>Australia’s prioritisation of AWS</h2>
<p>Australia, meanwhile, has been developing what it more commonly refers to as robotics and autonomous systems (RAS) with <a href="https://www.tandfonline.com/doi/full/10.1080/10357718.2022.2095615">gusto</a>. Since 2016, Australia has identified RAS as a priority area of development and substantially increased <a href="https://www.dst.defence.gov.au/nextgentechfund">funding</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<p>The Australian <a href="https://www.navy.gov.au/sites/default/files/documents/RAN_WIN_RASAI_Strategy_2040f2_hi.pdf">navy</a>, <a href="https://researchcentre.army.gov.au/sites/default/files/2020-03/robototic_autonomous_systems_strategy.pdf">army</a> and defence force (<a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">ADF</a>) have each released concept documents since 2018, discussing RAS and their associated benefits, risks, challenges and opportunities.</p>
<p>Key systems Australia is pursuing include the autonomous aircraft <a href="https://news.defence.gov.au/service/introducing-ghost-bat">Ghost Bat</a>, three different kinds of <a href="https://www.australiandefence.com.au/defence/sea/navy-s-uncrewed-undersea-plans">extra-large underwater autonomous vehicles</a> and <a href="https://www.minister.defence.gov.au/minister/melissa-price/media-releases/autonomous-truck-project-passes-major-milestone">autonomous trucks</a>.</p>
<h2>Why is Australia seeking to develop these technologies?</h2>
<p>The short answer is threefold: military advantage, saving lives and economics.</p>
<p>Australia and its allies and partners, particularly the US, are <a href="https://www.ussc.edu.au/analysis/us-china-technology-competition-and-what-it-means-for-australia">fearful</a> of losing the technological superiority they have long held over rivals such as China. </p>
<p>Large military capabilities, like nuclear-powered submarines, take both time and money to acquire. Australia is further limited in what it can do by the size of its defence force. RAS are seen as a way to potentially maintain advantage, and to do more with less.</p>
<p>RAS are also seen as a way to save lives. A <a href="https://media.defense.gov/2020/Nov/23/2002540369/-1/-1/1/WYATT.PDF">survey</a> of Australian military personnel found they considered reduction of harm and injury to defence personnel, allied personnel and civilians among the most important potential benefits of RAS. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>The Australian Defence Force also <a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">believes</a> RAS will be cheaper than large platforms. Inflation means money already committed to defence has less purchasing power. RAS present an opportunity to achieve the same outcomes at a lower cost.</p>
<p>Meanwhile, in 2018, the Australian government outlined its intention to become a top-ten <a href="https://www.ft.com/content/d743d758-04b2-11e8-9650-9c0ad2d7c5b5">defence exporter</a>. There are keen <a href="https://breakingdefense.com/2022/03/aussies-aim-for-1b-in-exports-of-loyal-wingman-now-ghost-bat/">hopes</a> the Ghost Bat will become a successful defence export. </p>
<p>At the same time, the government is keen to <a href="https://apo.org.au/sites/default/files/resource-files/2016-02/apo-nid93621.pdf">build</a> closer ties between defence, industry and academia. Industry and academia both vie for defence funding, and this drives development of RAS.</p>
<p>Of course, the technology is new. It’s not guaranteed RAS will save lives, save money or achieve military advantage. The extent to which RAS will be used, and what they will be used for, is not foreseeable. It is in this uncertainty that New Zealand must make judgments about AWS and alliance management.</p>
<figure class="align-center ">
<img alt="Armed Autonomous aerial vehicle on runway" src="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Autonomous systems are seen as a way to save lives.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<h2>What this means for the trans-Tasman relationship</h2>
<p>Nuclear-powered submarines captured attention when Australia’s new AUKUS partnership with the US and UK was announced, but AUKUS’s primary purpose is a much broader partnership to share defence technology, including RAS. </p>
<p>The most recent statement from the AUKUS working groups <a href="https://www.gov.uk/government/news/readout-of-aukus-joint-steering-group-meetings--2">says</a> they “will seek opportunities to engage allies and close partners”. Last week, US Deputy Secretary of State Wendy Sherman made it clear New Zealand was one such <a href="https://www.rnz.co.nz/news/political/472583/us-would-have-conversations-with-new-zealand-if-time-comes-for-others-to-join-aukus-top-diplomat">partner</a>.</p>
<p>Australia’s focus on RAS, particularly in the context of AUKUS, may soon bring alliance questions to the fore. Strategic studies expert Robert Ayson has argued AUKUS, combined with increased strategic tension, <a href="https://pacforum.org/publication/pacnet-48-new-zealand-and-aukus-affected-without-being-included">means</a> that “year by year New Zealand’s alliance commitment to the defence of Australia will carry bigger implications”. AWS will play a role in these implications.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/nukes-allies-weapons-and-cost-4-big-questions-nzs-defence-review-must-address-188732">Nukes, allies, weapons and cost: 4 big questions NZ's defence review must address</a>
</strong>
</em>
</p>
<hr>
<p>AWS may seem an insignificant trans-Tasman difference compared to the use of nuclear technologies. But AWS come with a lot more uncertainty and fuzziness than, say, <a href="https://www.smh.com.au/world/oceania/not-in-our-waters-ardern-says-no-to-visits-from-australia-s-new-nuclear-subs-20210916-p58s7k.html">banning</a> nuclear-powered submarines in New Zealand waters. This fuzziness creates ample room for misperceptions and poor communication.</p>
<p>Trust in alliance relationships is easily damaged, and difficult to manage. Clear communication and ensuring a good understanding of each other’s positions is essential. The ambiguity of AWS makes these things difficult. </p>
<p>New Zealand and Australia may need to clarify their respective positions before Australia’s defence review is released next March. Otherwise, they run the risk of fuelling misunderstandings at a delicate moment for trans-Tasman relations.</p>
<p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>Diverging views on automated weapons systems could make it difficult for Australia and New Zealand to manage military ties at a delicate time in trans-Tasman relations.Sian Troath, Postdoctoral fellow, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1853992022-06-21T02:38:27Z2022-06-21T02:38:27Z‘Bet you’re on the list’: how criticising ‘smart weapons’ got me banned from Russia<figure><img src="https://images.theconversation.com/files/469892/original/file-20220621-13-ukl5qx.jpeg?ixlib=rb-1.1.0&rect=0%2C24%2C4031%2C2993&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://photos.aap.com.au/search/20220614001669403745">Pavel Nemecek / AP</a></span></figcaption></figure><p>I woke up on Friday morning a pawn in a Kafka-esque story. Except I hadn’t been transformed into a chess piece but was a diplomatic pawn, a small player in a much larger international story. I read the news that I and 119 other “prominent” Australians were <a href="https://www.theguardian.com/world/2022/jun/16/russia-bans-121-australians-including-journalists-and-defence-officials">banned from travelling to Russia “indefinitely”</a>. </p>
<p>The Russian sanctions were a response to <a href="https://www.dfat.gov.au/international-relations/security/sanctions/sanctions-regimes/russia-sanctions-regime">Western sanctions</a> and the “spreading of false information about Russia”. The Russian Foreign Ministry announced 121 people had been sanctioned but, in a beautifully Russian bureaucratic bungle, Air Vice-Marshal Darren Goldie was banned twice, making it just 120 of us on <a href="https://www.mid.ru/ru/foreign_policy/news/1818118/">the list</a>. </p>
<p>As usual, I was the second person in my family to know. My wife had woken before me and was listening to the news. “Russia has banned a bunch more Australians,” she told me. “Bet you’re on the list.” </p>
<p>The rest of the list was made up of journalists, business people, army officials, politicians and the odd academic like myself. What unites us is our outspoken criticism of Russia’s actions in Ukraine. </p>
<h2>No more trips to Russia</h2>
<p>This is one club of which I am proud to be a member. </p>
<p>And rather than silence the critics, Russia’s actions only give our concerns more exposure. After all, you wouldn’t be reading this if Russia hadn’t banned me.</p>
<p>I have a number of Russian friends and colleagues that I am saddened now not to be able to visit. I was at a conference in Moscow a few years ago and had a great time. I promised then to return to see the delights of St Petersburg. </p>
<p>And I always imagined one day I’d follow Paul Theroux’s footsteps on the trans-Siberian express. But it seems I will now only ever read about such adventures from the comfort of my armchair. </p>
<h2>AI-powered landmines</h2>
<p>This brings me to my outspoken criticism of Russia’s actions in Ukraine. </p>
<p>At the start of last week, I had the pleasure of speaking about artificial intelligence (AI) at <a href="https://www.theregister.com/2022/06/10/devfest_for_ukraine_june_1415/">DevFest Ukraine</a>, an online charity event put on by the tech community that raised over US$100,000 for those affected by Russia’s invasion. And in acknowledging the traditional owners of the land from which I was speaking, I also acknowledged the rightful owners of all illegally occupied lands, including those in Ukraine. </p>
<p>But I am sure it was another act that was the cause of my sanction: casting doubt on Russia’s claims about AI. In April, I was interviewed for <a href="https://www.theaustralian.com.au/business/technology/a-russian-claim-that-its-devastating-antipersonnel-mines-can-distinguish-between-soldiers-and-civilians-is-bogus-says-australias-toby-walsh/news-story/6bdd96cf39f9a0bf96c5a2de3af9a512">a story about Russian weaponry</a> in the Australian – and as the author is the only tech journalist who made the Russian list, I’m confident that article is to blame. </p>
<p>I can just imagine the Russian official in some nondescript office in the bowels of the Foreign Ministry reading the Australian and pulling out the file to which my name was added. </p>
<p>The article reported my significant concerns about Russia’s use of <a href="https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines">the “smart” AI-enabled POM-3 anti-personnel mine in Ukraine</a>. </p>
<p>Such mines are banned by the 1997 <a href="https://www.un.org/disarmament/anti-personnel-landmines-convention/">Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines</a> (informally known as the Ottawa Treaty or the Anti-Personnel Mine Ban Convention). Russia is not a party to this treaty, but <a href="https://treaties.un.org/Pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVI-5&chapter=26&clang=_en">164 states are</a>, including Australia and every country in Europe, Ukraine among them. </p>
<h2>A barbaric weapon</h2>
<p>The <a href="https://cat-uxo.com/explosive-hazards/landmines/pom-3-landmine">POM-3 is a particularly barbaric mine</a>, designed to cause maximum damage to humans. It’s a descendant of the German “<a href="https://www.warhistoryonline.com/war-articles/bouncing-betty.html">Bouncing Betty</a>” mine used in World War II. </p>
<p>When the mine is triggered, an expelling charge projects the warhead roughly one metre above ground level, at which point the warhead detonates. The warhead is packed with toothed rings designed to harm vital organs in a target’s body many metres away. </p>
<p>The mine is triggered by a seismic sensor that detects approaching footsteps. </p>
<p>Russia claims the mine is equipped with AI that can <a href="https://www.newscientist.com/article/2314453-russia-claims-smart-landmines-used-in-ukraine-only-target-soldiers/">recognise friendly soldiers</a>, thus minimising the risk of collateral damage. </p>
<p>This is an absurd claim. The footsteps of Ukrainian and Russian soldiers will produce the same seismic footprint. No AI can tell them apart. </p>
<h2>Not too late to limit AI weapons</h2>
<p>Russia’s wild claim illustrates a worrying trend in which states assert their weapons use “AI” to target combatants rather than civilians. Handing battlefield decision-making over to AI is a hugely dangerous proposition.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>And this is just one of the many dangers of AI in warfare. Others include the lowering of the barriers to war, and the development of new weapons of mass destruction.</p>
<p>Fortunately, it’s not too late to regulate this space. Indeed, the increasing use of hi-tech drones in the conflict in Ukraine has been a wake-up call to militaries around the world that technologies like this are fundamentally <a href="https://www.dw.com/en/ukraine-how-drones-are-changing-the-way-of-war/a-61681013">changing how we fight wars</a>. </p>
<p>Discussions are moving slowly at the United Nations to limit the use of lethal autonomous weapons. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>Australia has an opportunity to show leadership in this area. It has long been at the forefront of international efforts to combat the spread of chemical and biological weapons, but has taken a back seat in the diplomatic efforts around autonomous weapons. </p>
<p>It’s time we took up the cause of regulating weapons that use AI to identify, track and target humans. I could then get back to reading about the wonderful history of Russia from my armchair.</p>
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council as an ARC Laureate Fellow.</span></em></p>Russia’s absurd claims about ‘smart’ landmines show it’s high time the world put limits on autonomous weapons.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1802442022-03-29T19:13:29Z2022-03-29T19:13:29ZDrones over Ukraine: fears of Russian ‘killer robots’ have failed to materialise<p>Drones have played a starring role in Ukraine’s defence against the ongoing Russian attack. Before the invasion experts believed Russia’s own fleets of “killer robots” were likely to be a far more potent weapon, but to date they have hardly been seen.</p>
<p>What’s going on? Ukraine’s drone program grew from <a href="https://aerorozvidka.xyz/">a crowd-funded group of hobbyists</a>, who appear to know and like their technology – even if it isn’t the cutting edge. Russia, on the other hand, seems to have swarms of next-generation autonomous weapons, but generals <a href="https://breakingdefense.com/2022/03/russia-has-a-military-professionalism-problem-and-it-is-costing-them-in-ukraine/">may lack faith in the technology</a>. </p>
<h2>Drone vs drone</h2>
<p>Ukraine is using Turkish Bayraktar TB2 armed drones, provided under <a href="https://finabel.org/turkey-and-ukraine-tb2-drone-agreement/">a deal inked last year</a>. Operated by a crew on the ground, these are essentially remote-controlled planes armed with rockets or missiles. Ukraine is also using commercially available drones.</p>
<p>Less is known about Russia’s drones, particularly new models with artificial intelligence (AI) capabilities. Last year, the Russian Ministry of Defence announced the creation of <a href="https://www.nationaldefensemagazine.org/articles/2021/7/20/russia-expanding-fleet-of-ai-enabled-weapons">a special AI department</a> with its own budget, which would begin its work in December 2021. </p>
<p>Just before invading Ukraine, <a href="https://nationalinterest.org/blog/reboot/russian-drone-swarm-technology-promises-aerial-minefield-capabilities-198640">Russian forces were seen testing new “swarm” drones</a>, as well as unmanned autonomous weapons capable of tracking and shooting down enemy aircraft. However, there is no evidence they have been used in Ukraine for that purpose. </p>
<p>This isn’t the first time these types of drones with lethal capability have featured on the world stage. Russia deployed “interceptor” drones to defend against hostile aircraft when it annexed Crimea in 2014; and, in 2020, Azerbaijan used drones against Armenia during the Nagorno-Karabakh conflict. And the US has committed to <a href="https://www.cbsnews.com/news/u-s-giving-ukraine-more-drones-a-surprisingly-lethal-weapon-in-the-war-against-russia-so-far/">providing Ukraine access to its highly portable “suicide drone”, the Switchblade</a>.</p>
<h2>Are drones the future of warfare?</h2>
<p>The world has been grappling with the concept of “killer drones” for more than two decades. Despite international and domestic law concerns, defence forces around the world are investing heavily in autonomous weapon technologies because they cost far less than a similar crewed weapon, like a tank or aircraft, and don’t place drivers or pilots at risk. </p>
<p>As warfare becomes ever more technologically advanced, AI-powered drones are creating a new concept of power.</p>
<p><a href="https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/">As far back as 2017</a>, Russian President Vladimir Putin said the development of AI raises “colossal opportunities and threats that are difficult to predict”, warning that “the one who becomes the leader in this sphere will be the ruler of the world”.</p>
<p>The Russian leader predicted future wars would be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender”.</p>
<h2>Homemade drones</h2>
<p>Putin has previously identified the development of weapons with elements of AI as one of Russia’s five major military priorities.</p>
<p>Yet since Russia invaded Ukraine, it seems to be Ukrainian drones that are being used to greatest effect – predominantly by <a href="https://www.theguardian.com/world/2022/mar/28/the-drone-operators-who-halted-the-russian-armoured-vehicles-heading-for-kyiv">targeting Russian logistic elements</a> supplying fuel or ammunition to frontline forces. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/eyes-on-the-world-drones-change-our-point-of-view-and-our-truths-143838">Eyes on the world – drones change our point of view and our truths</a>
</strong>
</em>
</p>
<hr>
<p>Ukrainian soldiers have reportedly been using <a href="https://www.defenseone.com/ideas/2022/03/send-quadcopters-arm-ukrainian-citizens-simple-drones/362730/">drones bought off the shelf</a> to locate Russian military targets and to help coordinate artillery strikes. Reports have even emerged of Ukrainian soldiers <a href="https://www.wesh.com/article/ukraine-drone-enthusiasts-russian-invasion/39330353">jury-rigging explosives to homemade drones before flying them at Russian tanks</a>. </p>
<p>Footage of drone strikes is also proving a potent information weapon, with <a href="https://taskandpurpose.com/analysis/drones-ukraine-information-warfare/">Ukrainian soldiers uploading it to social media</a>. </p>
<h2>Where are Russia’s drones?</h2>
<p>It’s hard to know exactly why we haven’t seen a Russian drone onslaught.</p>
<p>One possible reason is that drones are being held in reserve for a later escalation in the conflict. Drones can deliver chemical, biological or even nuclear weapons without endangering a human pilot – and Russia’s current strategy suggests it may not shrink from using banned weapons.</p>
<p>Another possible reason is logistics. Given widespread reports of Russian military vehicles breaking down, Russia may not be able to support drone operations in Ukraine. </p>
<p><a href="https://breakingdefense.com/2022/03/russia-has-a-military-professionalism-problem-and-it-is-costing-them-in-ukraine/">According to RAND Corporation experts</a>, however, one of the biggest reasons may be a lack of trust in the technology. </p>
<h2>Why is trust so important?</h2>
<p>All modern military forces involve trust: trust in subordinates to follow orders, and trust in commanders to give lawful orders. When a machine is used in the place of a human, a commander must be able to trust that machine as much as a human being. </p>
<p>This produces significant problems. Researchers have long been aware of “machine bias”: <a href="https://medium.com/whattolabel/bias-in-machine-learning-d15ebee7db45">the idea that we trust machines to make decisions, simply because they’re machines</a>. Yet misplaced trust in machines – especially if they are making life-and-death decisions – can have catastrophic results. </p>
<p>One way to improve trust in military drones could be to limit them to simple roles. A drone acting simply as an airborne camera can’t fake what it sees, whereas a drone scanning video footage to identify targets (what the military call a “decision support system”) is far more likely to make a fatal mistake. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>Another way to improve trust in drones is to refuse to arm them with lethal weapons, or program them to disarm enemy soldiers. In 2007, John Canning, a researcher at the Naval Surface Warfare Center, suggested <a href="https://www.theregister.com/2007/04/13/i_robowarrior/">future autonomous weapons might attack rifles or ammunition instead of attacking the human holding them</a>.</p>
<p>In the age of autonomous warfare, the limit will be how far we trust machines. As lethal drones become more common and familiar, how satisfied are we that these drones will make the right decisions? To use these weapons we will need to trust them, but first we will need to make sure that trust is justified.</p>
<p class="fine-print"><em><span>Brendan Walker-Munro receives funding from the Australian Government through Trusted Autonomous Systems, a Defence Cooperative Research Centre funded through the Next Generation Technologies Fund. </span></em></p>Russia has sophisticated drone capabilities, but generals may not trust the technology enough to use it.Brendan Walker-Munro, Senior Research Fellow, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1736162021-12-20T13:14:50Z2021-12-20T13:14:50ZUN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research<figure><img src="https://images.theconversation.com/files/436998/original/file-20211210-27-1o7cvsn.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5763%2C4225&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Humanitarian groups have been calling for a ban on autonomous weapons.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/march-2019-berlin-a-robot-stands-in-front-of-the-news-photo/1131801019">Wolfgang Kumm/picture alliance via Getty Images</a></span></figcaption></figure><p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>The United Nations <a href="https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/">Convention on Certain Conventional Weapons</a> debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but <a href="https://www.reuters.com/article/us-un-disarmament-idAFKBN2IW1UJ">didn’t reach consensus on a ban</a>. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could be <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified Black people as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “<a href="https://news.un.org/en/story/2020/07/1068041">myth of a surgical strike</a>” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undergo while launching and maintaining wars. </p>
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback experienced around the world today</a>. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap</a>.</p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/an-autonomous-robot-may-have-already-killed-people-heres-how-the-weapons-could-be-more-destabilizing-than-nukes-168049">article</a> originally published on September 29, 2021.</em></p>
<p class="fine-print"><em><span>James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.James Dawes, Professor of English, Macalester CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1680492021-09-29T12:23:40Z2021-09-29T12:23:40ZAn autonomous robot may have already killed people – here’s how the weapons could be more destabilizing than nukes<figure><img src="https://images.theconversation.com/files/423433/original/file-20210927-21-wsi2zg.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4928%2C3280&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The term 'killer robot' often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary looking but no less lethal.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:W-MUTT_-_Ship-to-Shore_Maneuver_Exploration_and_Experimentation_2017_01.jpg">John F. Williams/U.S. Navy</a></span></figcaption></figure><p><em>An updated version of this article was published on Dec. 20, 2021. <a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">Read it here</a>.</em></p>
<p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn, disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could be <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified African Americans as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did so, and therefore don’t know how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will weaken two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the <a href="https://news.un.org/en/story/2020/07/1068041">“myth of a surgical strike”</a> to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undertake when launching and maintaining wars. </p>
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback</a> experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not confer the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap.</a> </p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<h2>New Zealand could take a global lead in controlling the development of ‘killer robots’ — so why isn’t it? (published August 19, 2021)</h2>
<figure><img src="https://images.theconversation.com/files/416654/original/file-20210818-27-1ppaodo.jpg?ixlib=rb-1.1.0&rect=8%2C0%2C5982%2C3853&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>“New Zealand versus the killer robots” might sound like a science fiction B-movie, but that was essentially the focus of an event at parliament earlier this month.</p>
<p>Hosted by Minister of Disarmament and Arms Control Phil Twyford, the “<a href="https://www.beehive.govt.nz/speech/remarks-dialogue-autonomous-weapons-systems-and-human-control">Dialogue on Autonomous Weapons Systems and Human Control</a>” looked at how New Zealand might take more of an international lead in regulating these highly contentious new technologies.</p>
<p>Twyford warned of the danger of warfare “delegated to machines”. He referred to a <a href="http://www.converge.org.nz/pma/nz-kr-survey.pdf">recent survey</a> showing widespread public opposition to the deployment of autonomous weapons in war and strong support for government action to ban or limit their development and use.</p>
<p>The prospect of New Zealand’s leadership has been warmly received by activists and campaigners involved in the “killer robots” debate. </p>
<p>Human Rights Watch’s Mary Wareham <a href="https://www.newsroom.co.nz/politics/pace-picks-up-in-the-war-against-killer-robots">has argued</a> New Zealand leadership could act as “a total catalyst for action”, while the Campaign to Stop Killer Robots listed Twyford’s commitment as one of the “<a href="https://www.stopkillerrobots.org/about/">key actions and achievements</a>” of its campaign to date.</p>
<p>Yet New Zealand has not joined the 30 states that have formally called for a <a href="https://www.pgaction.org/declaration-support-treaty-prohibition-faw.html">ban on autonomous weapons</a>, and Twyford’s statements have tended to waver between bullish and reserved. During the event at parliament he acknowledged the clear ethical problems with autonomous weapons, but also the complexity of making policy. </p>
<h2>Sensitivity to military allies</h2>
<p>If the mood of the people and government of New Zealand is strongly behind regulation, what makes the issue so difficult? </p>
<p>The short answer is politics and economics. A <a href="https://www.newsroom.co.nz/politics/pace-picks-up-in-the-war-against-killer-robots">major obstacle</a> for Twyford is the need to preserve the New Zealand Defence Force’s ability to work with allies and partners. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>Both the US and Australia are heavily invested in pursuing cutting-edge military technologies, including robotics, artificial intelligence and autonomy. A key pillar of their strategy is building systems that <a href="https://sldinfo.com/2020/02/shaping-an-australian-navy-approach-to-maritime-remotes-artificial-intelligence-and-combat-grids/">allow more coordination</a> on the battlefield.</p>
<p>Leading a movement to have these systems regulated or banned could see New Zealand’s military shut out of joint exercises where such technologies are being trialled or used.</p>
<p>Given the <a href="https://www.rnz.co.nz/news/political/441936/where-will-new-zealand-stand-in-rising-tensions-between-china-and-other-allies">political pressure</a> to take a stronger stand against China, it seems unlikely New Zealand’s Foreign Affairs and Trade or Defence ministries will want to risk further discord with key defence partners.</p>
<h2>Protecting high-tech industry</h2>
<p>The second hurdle lies in the economic promise of technologies developed in New Zealand that could potentially be used in autonomous weapons programmes elsewhere. </p>
<p>Many leading engineers and technologists have advocated for the <a href="https://futureoflife.org/ai-open-letter/">regulation or banning</a> of autonomous weapons, but others are attracted by the potential rewards of military-related projects. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/killer-robots-free-will-and-the-illusion-of-control-87460">Killer robots, free will and the illusion of control</a>
</strong>
</em>
</p>
<hr>
<p>These tensions have <a href="https://www.newshub.co.nz/home/politics/2021/06/rocket-lab-not-evil-but-kiwis-right-to-feel-uneasy-about-us-military-ties-journalist.html">already surfaced</a> in the debate about US military payloads being launched from New Zealand by US-owned aerospace company Rocket Lab. </p>
<p>Autonomous weapons could well see similar questions raised about other technologies developed by New Zealand companies or researchers — most obviously in the fields of computer vision, robotics and swarm intelligence — that could be used in military systems. </p>
<p>Regulating autonomous weapons without also inhibiting potentially lucrative AI and robotics research and development remains a challenge.</p>
<h2>Public opinion not enough</h2>
<p>The hope that regulation of autonomous weapons could represent another “anti-nuclear moment” in New Zealand’s disarmament and foreign policy history therefore seems premature. </p>
<p>While it’s clear there is support for some form of regulation, there’s <a href="https://www.scoop.co.nz/stories/PO2008/S00133/killer-robots-growing-support-for-ban-but-new-zealands-stance-remains-weak.htm">little evidence</a> at this stage to suggest public opinion will sway the government’s current conservative and watchful position.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-has-already-been-weaponised-and-it-shows-why-we-should-ban-killer-robots-102736">AI has already been weaponised – and it shows why we should ban 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>So, what should be done? In the absence of international agreement, New Zealand could press ahead with its own domestic legislation to regulate these technologies, as proposed in a <a href="https://www.parliament.nz/en/pb/petitions/document/PET_104114/petition-of-edwina-hughes-for-aotearoa-new-zealand-campaign">petition</a> from local Campaign to Stop Killer Robots coordinator Edwina Hughes. </p>
<p>This has the potential to expose a lack of serious commitment to principle in the government’s position, but it would still come up against the political and economic interests opposed to action on autonomous weapons.</p>
<p>Acknowledging those political and economic obstacles is a critical first step for meaningful public debate.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/never-mind-killer-robots-even-the-good-ones-are-scarily-unpredictable-82963">Never mind killer robots – even the good ones are scarily unpredictable</a>
</strong>
</em>
</p>
<hr>
<h2>Engagement and transparency the key</h2>
<p>In the near term, a stocktaking exercise should be undertaken to understand what research and development is being carried out in New Zealand universities and companies. </p>
<p>Efforts should also be made to understand which autonomous technologies are likely to be developed and possibly deployed in the coming years by New Zealand’s major defence partners, particularly Australia and the US. </p>
<p>Serious, sustained dialogue with commercial interests and defence partners is a necessary precondition for the advancement of Twyford’s agenda. While there is <a href="http://www.converge.org.nz/pma/nz-gge,5aug21.pdf">some evidence</a> this work is underway, it needs greater transparency to ensure public understanding of what’s at stake. </p>
<p>Without that, New Zealand will probably struggle to take an international leadership role on this critical issue.</p>
<p class="fine-print"><em><span>Jeremy Moses receives funding from the Royal Society of New Zealand Marsden Fund.</span></em></p><p class="fine-print"><em><span>Geoffrey Ford receives funding from the Royal Society of New Zealand Marsden Fund.</span></em></p><p class="fine-print"><em><span>Sian Troath receives funding from the Royal Society of New Zealand Marsden Fund.</span></em></p>
<p><em>New Zealanders are worried about autonomous weapons. But military alliances with the US and Australia, and potential economic gains from local robotics research, mean NZ won’t yet take a tough stand. Jeremy Moses, Associate Professor in International Relations, University of Canterbury; Geoffrey Ford, Lecturer in Digital Humanities / Postdoctoral Fellow in Political Science and International Relations, University of Canterbury; Sian Troath, Postdoctoral fellow, University of Canterbury. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>Lethal autonomous weapons and World War III: it’s not too late to stop the rise of ‘killer robots’ (published August 12, 2021)</h2>
<figure><img src="https://images.theconversation.com/files/415601/original/file-20210811-13-fvcs86.jpg?ixlib=rb-1.1.0&rect=615%2C0%2C1023%2C675&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The STM Kargu attack drone.</span> <span class="attribution"><a class="source" href="https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav">STM</a></span></figcaption></figure><p>Last year, according to a <a href="https://undocs.org/S/2021/229">United Nations report</a> published in March, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were <a href="https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav">Turkish-made quadcopters</a> about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so. </p>
<p>Artificial intelligence researchers like me have been <a href="https://futureoflife.org/open-letter-autonomous-weapons/">warning</a> of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention, for years. A <a href="https://iview.abc.net.au/video/NC2103H026S00">recent episode of 4 Corners</a> reviewed this and many other risks posed by developments in AI.</p>
<p>Around 50 countries are <a href="https://www.hrw.org/news/2021/08/02/killer-robots-urgent-need-fast-track-talks">meeting</a> at the UN offices in Geneva this week in the latest attempt to hammer out a treaty to prevent the proliferation of these killer devices. History shows such treaties are needed, and that they can work.</p>
<h2>The lesson of nuclear weapons</h2>
<p>Scientists are pretty good at warning of the dangers facing the planet. Unfortunately, society is less good at paying attention.</p>
<p>In August 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, killing up to 200,000 civilians. Japan surrendered days later. The second world war was over, and the Cold War began.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/world-politics-explainer-the-atomic-bombings-of-hiroshima-and-nagasaki-100452">World politics explainer: The atomic bombings of Hiroshima and Nagasaki</a>
</strong>
</em>
</p>
<hr>
<p>The world still lives today under the threat of nuclear destruction. On a dozen or so occasions since then, we have come within minutes of all-out nuclear war.</p>
<p>Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about such a future. A <a href="https://www.atomicheritage.org/key-documents/szilard-petition">secret petition</a> was sent to President Harry S. Truman in July 1945. It accurately predicted the future:</p>
<blockquote>
<p>The development of atomic power will provide the nations with new means of destruction. The atomic bombs at our disposal represent only the first step in this direction, and there is almost no limit to the destructive power which will become available in the course of their future development. Thus a nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.</p>
<p>If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation. All the resources of the United States, moral and material, may have to be mobilized to prevent the advent of such a world situation …</p>
</blockquote>
<p>Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually assured destruction, the “continuous danger of sudden annihilation” that the physicists warned about in July 1945.</p>
<h2>A warning to the world</h2>
<p>Six years ago, thousands of my colleagues issued a <a href="https://futureoflife.org/open-letter-autonomous-weapons/">similar warning</a> about a new threat. Only this time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being developed in secret. Nevertheless, they pose a similar threat to global stability.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">Open letter: we must stop killer robots before they are built</a>
</strong>
</em>
</p>
<hr>
<p>The threat comes this time from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them “killer robots”.</p>
<p>Our open letter to the UN carried a stark warning.</p>
<blockquote>
<p>The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/worlds-deadliest-inventor-mikhail-kalashnikov-and-his-ak-47-126253">World's deadliest inventor: Mikhail Kalashnikov and his AK-47</a>
</strong>
</em>
</p>
<hr>
<p>Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. An army can take on the riskiest of missions without endangering its own soldiers. </p>
<h2>Nightmare swarms</h2>
<p>There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand to a machine the decision of whether a person should live or die. </p>
<p>Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction. </p>
<p>Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them. Now just one programmer could control hundreds of weapons.</p>
<p>In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires considerable technical sophistication. You need the resources of a nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium and plutonium. As a result, nuclear weapons have not proliferated greatly. </p>
<p>Autonomous weapons require none of this, and if produced they will likely become cheap and plentiful. They will be perfect weapons of terror. </p>
<p>Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can you imagine such drones in the hands of terrorists and rogue states with no qualms about turning them on civilians? They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.</p>
<h2>Time for a treaty</h2>
<p>We stand at a crossroads on this issue. It must become morally unacceptable for machines to decide who lives and who dies, and the diplomats at the UN must negotiate a treaty limiting these weapons, just as we have treaties limiting chemical, biological and other weapons. In this way, we may be able to save ourselves and our children from this terrible future.</p>
<p class="fine-print"><em><span>Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia. He is a Fellow of the Australian Academy of Science and author of the recent book, “2062: The World that AI Made” that explores the impact AI will have on society, including the impact on war. </span></em></p>Like atomic bombs and chemical and biological weapons, deadly drones that make their own decisions must be tightly controlled by an international treaty.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1600952021-05-04T02:13:10Z2021-05-04T02:13:10ZIs ‘Spot’ a good dog? Why we’re right to worry about unleashing robot quadrupeds<figure><img src="https://images.theconversation.com/files/398482/original/file-20210503-15-1grtu2b.jpg?ixlib=rb-1.1.0&rect=134%2C416%2C3664%2C2652&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">GettyImages</span></span></figcaption></figure><p>When it comes to dancing, pulling a sled, climbing stairs or doing tricks, “Spot” is definitely a good dog. It can navigate the built environment and perform a range of tasks, clearly demonstrating its flexibility as a software and hardware platform for commercial use.</p>
<p>Viral videos of Boston Dynamics’ robotic quadruped showcasing those abilities have been a key pillar of its marketing strategy. But earlier this year, when a New York art collective harnessed Spot to make a different point, the company was quick to deny its potential for harm. </p>
<p>The project, “<a href="https://spotsrampage.com/">Spot’s Rampage</a>”, involved fitting one of the robotic dogs with a paintball gun and allowing internet users to take remote control of the creature to destroy various art works in a gallery. It ended with Spot <a href="https://nerdist.com/article/art-experiment-boston-dynamics-robot-spot-mafunction/">failing to function correctly</a>, but Boston Dynamics used Twitter to <a href="https://twitter.com/BostonDynamics/status/1362921918781943816">strongly criticise</a> the stunt:</p>
<blockquote>
<p>We condemn the portrayal of our technology in any way that promotes violence, harm, or intimidation. Our mission is to create and deliver surprisingly capable robots that inspire, delight and positively impact society.</p>
</blockquote>
<p>“Spot’s Rampage” was not the first project to imagine robot quadrupeds being used for violent ends. Spot also <a href="https://ew.com/tv/2017/12/29/black-mirror-metalhead-interview/">inspired</a> the “Metalhead” episode of dystopian TV series Black Mirror, in which robot quadrupeds relentlessly pursue and kill human prey. </p>
<p>This is more than science fiction, however. A serious debate over the regulation or banning of <a href="https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/">lethal autonomous weapons systems</a> is happening under the auspices of the United Nations, including how such systems should comply with existing humanitarian laws. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wlkCQXHEgjA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Robot anxiety</h2>
<p>This contrast between the potentially violent robots of “Spot’s Rampage” and “Metalhead” and Boston Dynamics’ insistence that Spot be viewed as a force for good illustrates the tensions we have observed in our research. </p>
<p>As part of a larger project looking at <a href="https://mappinglaws.net/">debates on lethal autonomous weapons</a>, we made a detailed study of 88,970 tweets about Spot from 2007 to 2020. The results indicate public responses have been significantly less positive than Boston Dynamics would like.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/female-robots-are-seen-as-being-the-most-human-why-158666">Female robots are seen as being the most human. Why?</a>
</strong>
</em>
</p>
<hr>
<p>Despite the generally playful and peaceful presentation of Spot in Boston Dynamics’ videos, and obvious public interest and fascination with the technology, there is also recurring scepticism and concern from Twitter users. </p>
<p>The word cloud below maps the most commonly used negative language in those tweets. Words such as “terrifying”, “war” and “doomed” are noticeably prominent.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=454&fit=crop&dpr=1 600w, https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=454&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=454&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=571&fit=crop&dpr=1 754w, https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=571&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/398263/original/file-20210503-13-1wupab6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=571&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Analysis of the use of emotive words shows recurring features of conversations about Spot: dark humour, sarcasm and suspicion about the intended uses of the technology. Associations between Boston Dynamics, its previous owner Google, the military and killer robots portrayed in popular culture (such as the Terminator films) also recur.</p>
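The kind of word-frequency analysis underlying such a word cloud can be sketched in a few lines of Python. The negative-word lexicon and the sample tweets below are illustrative stand-ins, not the authors' actual data or word list:

```python
from collections import Counter
import re

# Illustrative lexicon of negative words (a real study would use a
# much larger sentiment lexicon).
NEGATIVE_WORDS = {"terrifying", "war", "doomed", "creepy", "dystopian"}

def negative_word_counts(tweets):
    """Count occurrences of negative-lexicon words across all tweets."""
    counts = Counter()
    for tweet in tweets:
        # Lowercase and extract rough word tokens.
        tokens = re.findall(r"[a-z']+", tweet.lower())
        counts.update(t for t in tokens if t in NEGATIVE_WORDS)
    return counts

# Hypothetical sample tweets.
tweets = [
    "This robot dog is terrifying",
    "We are doomed. Terrifying stuff.",
    "Built for war?",
]
print(negative_word_counts(tweets).most_common(3))
# -> [('terrifying', 2), ('doomed', 1), ('war', 1)]
```

The resulting counts would then be scaled to word sizes to draw a cloud like the one above.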
<p>Boston Dynamics CEO Robert Playter has <a href="https://www.cbsnews.com/news/boston-dynamics-robots-humans-animals-60-minutes-2021-03-28/">dismissed</a> such negative public reactions as “fiction” grounded in the “rogue robot story” and misunderstandings of the technology. </p>
<p>Depictions of robots that will not harm humans and even save lives have been a mainstay of public messaging by both Boston Dynamics and the US military’s Defense Advanced Research Projects Agency. From <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/04/24/844770815/meet-spot-the-robot-that-could-help-doctors-remotely-treat-covid-19-patients">fighting COVID-19</a> to <a href="https://www.subtchallenge.com/">search and rescue</a> to <a href="https://www.businessinsider.com.au/us-marines-testing-boston-dynamics-robot-called-spot-2015-9?r=US&IR=T">taking soldiers out of harm’s way</a>, the potential humanitarian applications of robotic quadrupeds in civilian and military service are emphasised.</p>
<h2>Military connections</h2>
<p>But negative reactions should not be too easily discounted. Boston Dynamics’ technology was advanced through military funding, and military applications have been seen as a key market. In 2019, company founder and then CEO <a href="https://www.boston.com/news/technology/2019/10/28/boston-dynamics-robots-terrifying">Marc Raibert signalled</a> Boston Dynamics “will probably have military customers”. </p>
<p>And in the month following the “Spot’s Rampage” prank, the robot was <a href="https://www.theverge.com/2021/2/24/22299140/nypd-boston-dynamics-spot-robot-dog">tested by the NYPD</a> and by <a href="https://www.theverge.com/2021/4/7/22371590/boston-dynamics-spot-robot-military-exercises-french-army">French armed forces</a> in combat exercises — although public backlash against the NYPD trials has <a href="https://www.theverge.com/2021/4/29/22409559/nypd-robot-dog-digidog-boston-dynamics-contract-terminated">brought an early end</a> to the NYPD’s contract with Boston Dynamics.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/abusing-a-robot-wont-hurt-it-but-it-could-make-you-a-crueller-person-126187">Abusing a robot won't hurt it, but it could make you a crueller person</a>
</strong>
</em>
</p>
<hr>
<p>Meanwhile, other robotics companies, including Boston Dynamics’ competitor Ghost Robotics, have actively and successfully sought <a href="https://www.thedrive.com/the-war-zone/36229/the-air-force-just-tested-robot-dogs-for-use-in-base-security">contracts with the US military</a>. </p>
<p>Ghost Robotics CEO Jiren Parikh has <a href="https://breakingdefense.com/2020/01/air-force-seeks-innovators-at-first-abms-industry-day/">said</a> its Vision 60 quadruped “can be used for anything from perimeter security to detection of chemical and biological weapons to actually destroying a target”. </p>
<p>More recently, Ghost Robotics released <a href="https://www.youtube.com/watch?v=0iOmcubN51A">footage</a> of its robot quadruped firing a projectile into a target to demonstrate its potential use for bomb disposal.</p>
<p>The apparent flexibility of these machines, which can carry different payloads and be fitted and programmed for different missions, suggests a range of potential applications, including as lethal autonomous weapons. </p>
<p>As long as the military end-use remains uncertain and the technology itself is still developing, we should remain wary of attempts by developers, marketers and military advocates to shape and manage public sentiment with the promise of “saving lives”.</p>
<p class="fine-print"><em><span>Jeremy Moses receives funding from the Royal Society of New Zealand Marsden Fund. </span></em></p><p class="fine-print"><em><span>Geoffrey Ford receives funding from the Royal Society of New Zealand Marsden Fund. </span></em></p>Marketing for robotic ‘dogs’ plays up their potential for good, but the debate about lethal autonomous weapons suggests public anxiety is warranted.Jeremy Moses, Associate Professor in International Relations, University of CanterburyGeoffrey Ford, Lecturer in Digital Humanities / Postdoctoral Fellow in Political Science and International Relations, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1027362018-09-06T13:19:17Z2018-09-06T13:19:17ZAI has already been weaponised – and it shows why we should ban ‘killer robots’<figure><img src="https://images.theconversation.com/files/235215/original/file-20180906-190636-aogrro.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/unmanned-air-uav-spy-above-enemy-26952160?src=-ZOKXFCzFXCQZUjYk5R16g-1-16">Oleg Yarko/Shutterstock</a></span></figcaption></figure><p>A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.</p>
<p>I witnessed this growing gulf at a recent UN meeting of more than 70 countries <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">in Geneva</a>, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf">the US claimed</a> that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.</p>
<p>Yet it’s highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/autonomous-weapons-systems-and-changing-norms-in-international-relations/8E8CC29419AF2EF403EA02ACACFCF223">setting undesirable standards</a> for its role in the use of force.</p>
<p>A series of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">open letters</a> by prominent researchers speaking out against weaponising artificial intelligence have helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.</p>
<p>Most air defence systems <a href="https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf">already have</a> significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Humans still press the trigger, but for how long?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541?src=eQqZybPxaHhkvow-YSqfIA-1-1">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries’ militaries to drop bombs on targets. But we know from incidents <a href="https://www.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/The%20Civilian%20Impact%20of%20Drones.pdf">in Afghanistan and elsewhere</a> that drone images aren’t enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision-making, often with <a href="http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/">harmful effects</a>. </p>
<p>As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices laid out by drones. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as “semi-autonomous” or so-called “legacy systems”. Again, this makes the issue of “killer robots” seem more futuristic than it really is. This also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.</p>
<p>Several key principles of international humanitarian law require deliberate human judgements that machines <a href="https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/">are incapable of</a>. For example, the legal definition of who is a civilian and who is a combatant isn’t written in a way that could be programmed into AI, and <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903">machines lack</a> the situational awareness and ability to infer things necessary to make this decision.</p>
<h2>Invisible decision making</h2>
<p>More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. Drones <a href="https://www.theguardian.com/science/the-lay-scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan">already rely heavily</a> on intelligence data processed by “black box” algorithms to choose their proposed targets, and those algorithms are very difficult to understand. This <a href="http://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">makes it harder</a> for the human operators who actually press the trigger to question target proposals.</p>
<p>As the UN continues to debate this issue, it’s worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically <a href="http://www.article36.org/wp-content/uploads/2016/04/A36-Disarm-Dev-Marginalisation.pdf">less likely</a> to attend international disarmament talks. So their willingness to speak out strongly against autonomous weapons is all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of the ones in favour of autonomous weapons) also reminds us that they are most at risk from this technology.</p>
<p>Given what we know about existing autonomous systems, we should be very concerned that “killer robots” will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.</p>
<p class="fine-print"><em><span>Ingvild Bode receives funding from the Joseph Rowntree Charitable Trust. </span></em></p>The debate on autonomous weapons isn’t paying enough attention to the technology already in use.Ingvild Bode, Senior Lecturer in International Relations, University of KentLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1014272018-08-21T10:32:05Z2018-08-21T10:32:05ZBan ‘killer robots’ to protect fundamental moral and legal principles<figure><img src="https://images.theconversation.com/files/232107/original/file-20180815-2909-5xtnkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The U.S. military is already testing a Modular Advanced Armed Robotic System.</span> <span class="attribution"><a class="source" href="https://www.marforpac.marines.mil/Exercises/RIMPAC/RIMPAC-Photos/igphoto/2001572635/">Lance Cpl. Julien Rodarte, U.S. Marine Corps</a></span></figcaption></figure><p>When drafting a <a href="https://www.britannica.com/event/Hague-Conventions">treaty on the laws of war</a> at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language. </p>
<p>This standard, known as the <a href="https://www.icrc.org/eng/resources/documents/article/other/57jnhy.htm">Martens Clause</a>, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”</p>
<p>I was the lead author of a <a href="https://www.hrw.org/node/321376">new report</a> by <a href="https://www.hrw.org/">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a> that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">weapons</a>.</p>
<p>Representatives of more than 70 nations will gather from August 27 to 31 at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.</p>
<h2>Making rules for the unknowable</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=712&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=712&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=712&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=895&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=895&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=895&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Russian diplomat Fyodor Fyodorovich Martens, for whom the Martens Clause is named.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Friedrich_Fromhold_Martens_(1845-1909).jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.</p>
<p>Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/03/KRC_Briefing_CCWApr2018.pdf">all working to develop</a> them. They argue that the technology would process information faster and keep soldiers off the battlefield.</p>
<p>The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.</p>
<h2>Principles of humanity</h2>
<p>The history of the Martens Clause shows that it is a fundamental principle of international humanitarian law. Originating in the <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=9FE084CDAC63D10FC12563CD00515C4D">1899 Hague Convention</a>, versions of it appear in all four <a href="https://www.icrc.org/eng/assets/files/publications/icrc-002-0173.pdf#page=83">Geneva Conventions</a> and <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6C86520D7EFAD527C12563CD0051D63C">Additional Protocol I</a>. It is cited in <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=056FD614A7D05D90C12563CD0051EC75">numerous</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=CB3CAB98FF67D28EC12574C60038D63C">disarmament</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6D8BF0E4ABD74D62C125825D004955B1">treaties</a>. In 1995, concerns under the Martens Clause motivated countries to adopt a <a href="https://ihl-databases.icrc.org/ihl/INTRO/570">preemptive ban on blinding lasers</a>. </p>
<p>The principles of humanity require humane treatment of others and respect for human life and dignity. Fully autonomous weapons could not meet these requirements because they would be unable to feel compassion, an emotion that inspires people to minimize suffering and death. The weapons would also lack the legal and ethical judgment necessary to ensure that they protect civilians in complex and unpredictable conflict situations.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Under human supervision – for now.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span>
</figcaption>
</figure>
<p>In addition, as inanimate machines, these weapons could not truly understand the value of an individual life or the significance of its loss. Their algorithms would translate human lives into numerical values. By making lethal decisions based on such algorithms, they would reduce their human targets – whether civilians or soldiers – to objects, undermining their human dignity.</p>
<h2>Dictates of public conscience</h2>
<p>The growing opposition to fully autonomous weapons shows that they also conflict with the dictates of public conscience. Governments, experts and the general public have all objected, often on moral grounds, to the possibility of losing human control over the use of force.</p>
<p>To date, <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">26 countries</a> have expressly supported a ban, including China. <a href="https://www.theguardian.com/commentisfree/2018/apr/11/killer-robot-weapons-autonomous-ai-warfare-un">Most countries</a> that have spoken at the U.N. meetings on conventional weapons have called for maintaining some form of meaningful human control over the use of force. Requiring such control is effectively the same as banning weapons that operate without a person who decides when to kill.</p>
<p>Thousands of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">scientists and artificial intelligence experts</a> have endorsed a prohibition and demanded action from the United Nations. In July 2018, they issued a <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">pledge not to assist</a> with the development or use of fully autonomous weapons. <a href="https://www.clearpathrobotics.com/2014/08/clearpath-takes-stance-against-killer-robots/">Major corporations</a> have also called for the prohibition.</p>
<p>More than 160 <a href="https://www.paxforpeace.nl/stay-informed/news/religious-leaders-call-for-a-ban-on-killer-robots">faith leaders</a> and more than 20 <a href="https://nobelwomensinitiative.org/nobel-peace-laureates-call-for-preemptive-ban-on-killer-robots/?ref=204">Nobel Peace Prize laureates</a> have similarly condemned the technology and backed a ban. Several <a href="http://www.openroboethics.org/wp-content/uploads/2015/11/ORi_LAWS2015.pdf">international</a> and <a href="http://duckofminerva.dreamhosters.com/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons.pdf">national</a> public opinion polls have found that a majority of people who responded opposed developing and using fully autonomous weapons.</p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, a coalition of 75 nongovernmental organizations from 42 countries, has led opposition by nongovernmental groups. Human Rights Watch, for which I work, co-founded and coordinates the campaign.</p>
<h2>Other problems with killer robots</h2>
<p>Fully autonomous weapons would <a href="https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf">threaten more</a> than humanity and the public conscience. They would likely violate other key rules of international law. Their use would create a gap in accountability because no one could be held individually liable for the unforeseeable actions of an autonomous robot.</p>
<p>Furthermore, the existence of killer robots would spark widespread proliferation and an arms race – dangerous developments made worse by the fact that fully autonomous weapons would be vulnerable to hacking or technological failures.</p>
<p>Bolstering the case for a ban, our Martens Clause assessment highlights in particular how delegating life-and-death decisions to machines would violate core human values. Our report finds that there should always be meaningful human control over the use of force. We urge countries at this U.N. meeting to work toward a new treaty that would save people from lethal attacks made without human judgment or compassion. A clear ban on fully autonomous weapons would reinforce the longstanding moral and legal foundations of international humanitarian law articulated in the Martens Clause.</p>
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch.</span></em></p>A standard element of international humanitarian law since 1899 should guide countries as they consider banning lethal autonomous weapons systems.Bonnie Docherty, Lecturer on Law and Associate Director of Armed Conflict and Civilian Protection, International Human Rights Clinic, Harvard Law School, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/860862018-01-29T11:27:39Z2018-01-29T11:27:39ZArtificial intelligence is the weapon of the next Cold War<figure><img src="https://images.theconversation.com/files/203575/original/file-20180126-100908-5sdqnl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">With artificial intelligence weapons on both sides, are we in a new cold war?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/hand-robot-holding-gun-isolated-on-431271838">Dim Dimich/Shutterstock.com</a></span></figcaption></figure><p>It is easy to confuse the current geopolitical situation with that of the 1980s. The United States and Russia <a href="https://www.nytimes.com/2017/09/01/us/politics/russia-election-hacking.html">each accuse</a> <a href="http://www.businessinsider.com/putin-accuses-the-us-of-interfering-in-russias-presidential-election-2017-11">the other</a> of interfering in <a href="http://www.businessinsider.com/release-the-memo-campaign-russia-linked-twitter-accounts-2018-1">domestic affairs</a>. Russia has <a href="http://www.newsweek.com/russia-crimea-ukraine-how-putin-took-territory-without-fight-640934">annexed territory</a> over U.S. objections, raising concerns about <a href="https://www.reuters.com/article/us-usa-ukraine-arms/u-s-says-it-will-provide-ukraine-with-defensive-aid-idUSKBN1EH00X">military conflict</a>.</p>
<p>As during the Cold War <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">after World War II</a>, nations are developing and building weapons based on advanced technology. During the Cold War, the weapon of choice was nuclear missiles; today it’s software, whether it’s used for attacking <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">computer systems</a> or <a href="https://theconversation.com/ai-researchers-should-not-retreat-from-battlefield-robots-they-should-engage-them-head-on-45367">targets in the real world</a>.</p>
<p>Russian rhetoric about the importance of artificial intelligence is picking up – and with good reason: As artificial intelligence software develops, it will be able to make decisions based on more data, and more quickly, than humans can handle. As someone who researches the use of AI for applications as diverse as <a href="https://www.mprnews.org/story/2017/05/03/future-drones-student-competitions">drones</a>, <a href="https://doi.org/10.1109/SYSOSE.2017.7994957">self-driving vehicles</a> and <a href="http://ieeexplore.ieee.org/abstract/document/7500861/">cybersecurity</a>, I worry that the world may be entering – or perhaps already in – another cold war, fueled by AI. And I’m <a href="https://www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes/">not</a> <a href="http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/">alone</a>.</p>
<h2>Modern cold war</h2>
<p>Just as in the Cold War of the 1940s and 1950s, each side has reason to fear its opponent gaining a technological upper hand. In a recent meeting at the Strategic Missile Academy near Moscow, Russian President Vladimir Putin suggested that AI may be the way Russia can <a href="https://www.rt.com/news/414107-putin-military-ai-hint/">rebalance the power shift</a> created by the U.S. outspending Russia nearly 10-to-1 on defense each year. Russia’s state-sponsored <a href="https://www.rt.com/news/414107-putin-military-ai-hint/">RT media reported</a> AI was “key to Russia beating [the] U.S. in defense.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=473&fit=crop&dpr=1 600w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=473&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=473&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=594&fit=crop&dpr=1 754w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=594&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/203467/original/file-20180125-100926-1nh4h4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=594&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What’s the 21st-century equivalent of ‘duck and cover’ against an artificial intelligence attack?</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Return-of-the-Nuclear-Era/6b8900b597714d339e4852f3e8d692e4/132/0">AP Photo, File</a></span>
</figcaption>
</figure>
<p>It sounds remarkably like the rhetoric of the Cold War, where the United States and the Soviets each built up enough nuclear weapons to <a href="http://www.historytoday.com/john-swift/soviet-american-arms-race">kill everyone on Earth many times over</a>. This arms race led to the concept of <a href="http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/history/cold-war/strategy/strategy-mutual-assured-destruction.htm">mutual assured destruction</a>: Neither side could risk engaging in open war without risking its own ruin. Instead, both sides stockpiled weapons and <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">dueled</a> <a href="https://www.nps.gov/cham/learn/historyculture/the-cold-war-and-the-chamizal-dispute.htm">indirectly</a> via smaller armed conflicts and political disputes.</p>
<p>Now, nearly 30 years after the end of the Cold War, the U.S. and Russia have decommissioned <a href="http://news.bbc.co.uk/2/hi/in_depth/6103398.stm">tens of thousands</a> of nuclear weapons. However, tensions are growing. Any modern-day cold war would include cyberattacks and nuclear powers’ involvement in allies’ conflicts. It’s already happening.</p>
<p>Both countries have <a href="https://www.bloomberg.com/news/articles/2017-08-31/u-s-orders-closing-of-russian-consulate-in-san-francisco">expelled the other’s diplomats</a>. Russia has <a href="http://www.newsweek.com/russia-crimea-ukraine-how-putin-took-territory-without-fight-640934">annexed</a> Crimea. The Turkey-Syria border conflict has even <a href="http://www.theweek.co.uk/in-depth/91141/why-the-turkey-syria-border-conflict-is-a-proxy-war-for-us-russia">been called</a> a “proxy war” between the U.S. and Russia.</p>
<p>Both countries – and <a href="http://www.icanw.org/the-facts/nuclear-arsenals/">many others too</a> – still have nuclear weapons, but their use by a major power is still unthinkable to most. However, <a href="http://www.newsweek.com/us-russia-start-new-arms-race-says-putin-ally-788354">recent</a> <a href="http://www.news.com.au/technology/innovation/the-us-and-russia-are-headed-towards-a-new-nuclear-arms-race/news-story/74da2261f90c9637464aab198c3d9caf">reports</a> show increased public concern that countries might use them.</p>
<h2>A world of cyberconflict</h2>
<p>Cyberweapons, however, particularly those powered by AI, are still considered <a href="http://www.businessinsider.com/us-retaliate-russia-hacking-election-2017-7">fair game</a> by <a href="http://www.newsweek.com/obama-ordered-cyber-bombs-response-russian-hacking-report-628597">both sides</a>.</p>
<p>Russia and <a href="https://theconversation.com/tracing-the-sources-of-todays-russian-cyberthreat-81593">Russian-supporting hackers</a> have <a href="https://www.usnews.com/news/politics/articles/2017-03-17/long-before-new-hacks-us-worried-by-russian-spying-efforts">spied electronically</a>, launched <a href="http://www.independent.co.uk/news/world/europe/russia-cyber-attack-ukraine-petya-telebots-blackenergy-sbu-cadbury-a7819501.html">cyberattacks</a> against <a href="https://www.nbcnews.com/news/us-news/feds-suspect-russians-behind-cyber-attacks-power-plants-n780701">power plants</a>, <a href="http://www.chicagotribune.com/news/opinion/commentary/ct-ransomware-attack-hacking-virus-20170515-story.html">banks, hospitals and transportation systems</a> – and <a href="http://www.npr.org/2017/08/10/542634370/russian-cyberattack-targeted-elections-vendor-tied-to-voting-day-disruptions">against U.S. elections</a>. Russian cyberattackers have targeted <a href="https://www.wired.com/story/russian-hackers-attack-ukraine/">Ukraine</a> and U.S. allies <a href="https://www.theguardian.com/politics/2017/jun/25/cyber-attack-on-uk-parliament-russia-is-suspected-culprit">Britain</a> and <a href="https://www.reuters.com/article/us-germany-election-cyber/merkel-ally-cites-thousands-of-cyber-attacks-from-russian-ip-addresses-idUSKCN1BF1FA">Germany</a>. </p>
<p>The U.S. is <a href="https://www.scientificamerican.com/article/how-the-u-s-could-retaliate-against-russias-information-war/">certainly capable</a> of responding and <a href="https://www.washingtonpost.com/news/democracy-post/wp/2017/07/21/did-the-united-states-interfere-in-russian-elections/">may have done so</a>. </p>
<p>Putin has said he <a href="http://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/">views artificial intelligence</a> as “the future, not only for Russia, but for all humankind.” In September 2017, he told students that the nation that “becomes the leader in this sphere will <a href="https://www.rt.com/news/401731-ai-rule-world-putin/">become the ruler of the world</a>.” Putin isn’t saying he’ll hand over the nuclear launch codes to a computer, though <a href="http://www.imdb.com/title/tt0086567/">science fiction</a> has portrayed <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">computers launching missiles</a>. He is talking about many other uses for AI.</p>
<h2>Use of AI for nuclear weapons control</h2>
<p>Threats posed by surprise attacks from <a href="http://www.jstor.org/stable/24997010">ship- and submarine-based</a> nuclear weapons and weapons placed near a country’s borders may lead some nations to entrust self-defense tactics – including launching counterattacks – to the rapid decision-making capabilities of an AI system.</p>
<p>In case of an attack, the AI could act more quickly and without the <a href="http://www.slate.com/articles/news_and_politics/politics/2014/04/air_force_s_nuclear_missile_corps_is_struggling_millennial_missileers_suffer.html">potential hesitation</a> or <a href="http://www.bbc.com/news/world-europe-41314948">dissent of a human operator</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=474&fit=crop&dpr=1 600w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=474&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=474&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=595&fit=crop&dpr=1 754w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=595&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/203468/original/file-20180125-100919-189oac3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=595&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Should a robot sit in this chair and be able to turn the key to launch a nuclear missile?</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Travel-Nuclear-Tourism/6fe411c36c7a4b729811ba281569023e/4/0">U.S. Air Force via AP</a></span>
</figcaption>
</figure>
<p>A fast, automated response capability could help ensure potential adversaries know a nation is ready and willing to launch, the key to <a href="http://www.nuclearfiles.org/menu/key-issues/nuclear-weapons/history/cold-war/strategy/strategy-mutual-assured-destruction.htm">mutual assured destruction</a>’s effectiveness as a deterrent. </p>
<h2>AI control of non-nuclear weapons</h2>
<p>AI can also be used to control non-nuclear weapons, including unmanned vehicles like drones, and cyberweapons. Unmanned vehicles must be able to operate while their communications are impaired – which requires onboard AI control. AI control also <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">prevents a group that’s being targeted</a> from stopping a drone attack by destroying its <a href="http://foreignpolicy.com/2014/11/06/interview-with-a-u-s-air-force-drone-pilot-it-is-oddly-war-at-a-very-intimate-level/">control facility</a>, because control is distributed, both <a href="https://doi.org/10.1145/2810103.2810109">physically and electronically</a>. </p>
<p>Cyberweapons may, similarly, need to <a href="https://doi.org/10.1145/2810103.2810109">operate beyond the range of communications</a>. And reacting to them may require <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">such rapid response</a> that the responses would be best launched and controlled by AI systems. </p>
<p>AI-coordinated attacks can launch cyber or real-world weapons almost instantly, making the decision to attack before a human even notices a reason to. AI systems can change targets and techniques faster than humans can comprehend, much less analyze. For instance, an AI system might launch a drone to attack a factory, observe drones responding to defend, and launch a cyberattack on those drones, with no noticeable pause.</p>
<h2>The importance of AI development</h2>
<p>A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of <a href="https://theconversation.com/artificial-intelligence-cyber-attacks-are-coming-but-what-does-that-mean-82035">AI-powered cyberattacks</a> may still be some time away. </p>
<p>Countries might agree to a proposed <a href="https://www.wired.com/2017/05/microsoft-right-need-digital-geneva-convention/">Digital Geneva Convention</a> to limit AI conflict. But that won’t stop AI attacks by <a href="https://www.wsj.com/articles/putin-says-anti-russian-sentiment-is-counterproductive-1496318628">independent nationalist groups</a>, militias, criminal organizations, terrorists and others – and countries can back out of treaties. It’s almost certain, therefore, that someone will turn AI into a weapon – and that everyone else will do so too, even if only out of a desire to be prepared to defend themselves.</p>
<p>With Russia <a href="http://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-world/">embracing AI</a>, other nations that don’t embrace it – or that restrict AI development – risk becoming <a href="https://doi.org/10.1016/j.clsr.2010.03.003">unable to compete</a> – economically or militarily – with countries wielding developed AIs. Advanced AIs can create advantage for a nation’s businesses, not just its military, and those without AI may be severely disadvantaged. Perhaps most importantly, though, having sophisticated AIs in many countries could provide a <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">deterrent against attacks</a>, as happened with nuclear weapons during the Cold War.</p>
<p class="fine-print"><em><span>Jeremy Straub is the associate director of the NDSU Institute for Cyber Security Education and Research. He has received funding related to AI and robotics from the North Dakota State University, the NDSU Foundation and Alumni Association, the U.S. National Science Foundation, the University of North Dakota and Sigma Xi. The views presented are his own and do not necessarily represent the views of NDSU or funding agencies.</span></em></p>As tensions between the US and Russia escalate, both sides are developing technological capabilities, including artificial intelligence that could be used in conflict.Jeremy Straub, Assistant Professor of Computer Science, North Dakota State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/829632017-08-25T12:20:46Z2017-08-25T12:20:46ZNever mind killer robots – even the good ones are scarily unpredictable<figure><img src="https://images.theconversation.com/files/183350/original/file-20170824-18702-1fxlsp9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who could have predicted it would end like this?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The heads of more than 100 of the world’s top artificial intelligence companies are very alarmed about the development of “killer robots”. In an <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">open letter</a> to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.</p>
<p>But the real threat is much bigger – and it comes not just from human misconduct but from the machines themselves. Research into complex systems shows how behaviour can emerge that is far less predictable than the sum of individual actions. On one level, this means human societies can behave very differently from what you might expect by looking at individual behaviour alone. But it also applies to technology. Even ecosystems of relatively simple AI programs – what we might call stupid but good bots – can surprise us, even when every individual bot is behaving well.</p>
<p>The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear way. This makes these systems very hard to model and understand. For example, even after many years of climatology, it’s still impossible to make long-term weather predictions. These systems are often very sensitive to small changes and can experience explosive feedback loops. It is also very difficult to know the precise state of such a system at any one time. All these things make these systems intrinsically unpredictable. </p>
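The sensitivity described here can be made concrete with a toy model (an illustrative sketch, not something from the article): the logistic map is a one-line deterministic system whose trajectories diverge from almost identical starting points.

```python
# Toy illustration (not from the article): the logistic map x -> r*x*(1-x)
# is deterministic and trivially simple, yet at r = 4 it is chaotic --
# a tiny difference in the starting state is amplified at every step.
def logistic_trajectory(x0, r=4.0, steps=30):
    """Return the first `steps` iterates of the logistic map from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)   # start one millionth away

early_gap = abs(a[1] - b[1])                        # still tiny
late_gap = max(abs(x - y) for x, y in zip(a, b))    # grows to order 1
```

This is the same reason long-term weather forecasts fail: even when the rules are known exactly, the state can never be measured precisely enough.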
<p>All these principles apply to large groups of individuals acting in their own way, whether that’s human societies or groups of AI bots. My colleagues and I <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774">recently studied</a> one type of complex system that featured good bots used to automatically edit Wikipedia articles. These different bots are designed and operated by Wikipedia’s trusted human editors, and their underlying software is open source and available for anyone to study. Individually, they all share the goal of improving the encyclopaedia. Yet their collective behaviour turns out to be surprisingly inefficient.</p>
<p>These Wikipedia bots work based on well-established rules and conventions, but because the website doesn’t have a central management system there is no effective coordination between the people running different bots. As a result, we found pairs of bots that have been undoing each other’s edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn’t notice it either.</p>
<p>The bots are designed to speed up the editing process. But slight differences in the design of the bots or between people who use them can lead to a massive waste of resources in an ongoing “edit war” that would have been resolved much quicker with human editors.</p>
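The failure mode the study found can be sketched in a few lines (hypothetical bots and rules, purely illustrative): two bots, each enforcing its own spelling convention and each correct by its own lights, undo one another forever because neither knows the other exists.

```python
# Hypothetical sketch of a bot-vs-bot "edit war": each bot applies a
# sensible rule, but the rules conflict, so the page never settles.
def british_bot(text):
    return text.replace("color", "colour")

def american_bot(text):
    return text.replace("colour", "color")

page = "The color of the flag"
history = [page]
for _ in range(3):            # three rounds of "maintenance" by both bots
    page = british_bot(page)
    history.append(page)
    page = american_bot(page)
    history.append(page)

# The page only ever oscillates between two states: every edit by one
# bot is immediately undone by the other, wasting effort indefinitely.
states_seen = set(history)
```

Neither bot is faulty in isolation; the waste comes entirely from the lack of coordination between them.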
<p>We also found that the bots behaved differently in different language editions of Wikipedia. The rules are more or less the same, the goals are identical, the technology is similar. But in German Wikipedia, the collaboration between bots is much more efficient and productive compared to, for example, Portuguese Wikipedia. This can only be explained by the differences between the human editors who run these bots in different environments.</p>
<h2>Exponential confusion</h2>
<p>Wikipedia bots have very little autonomy and the system already operates very differently to the goals of individual bots. But the Wikimedia Foundation is <a href="https://blog.wikimedia.org/2017/07/19/scoring-platform-team/">planning to use</a> AI that will give more autonomy to the bots. That will likely lead to even more unexpected behaviour. </p>
<p>Another example is what can happen when two bots designed to speak to humans interact with each other. We’re no longer surprised by the answers given by artificial personal assistants such as the iPhone’s Siri. But put several of these chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WnzlbyTZsQY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The bigger the system becomes and the more autonomous each bot is, the more complex and hence unpredictable the future behaviour of the system will be. Wikipedia is an example of a large number of relatively simple bots; the chatbot example involves a small number of rather sophisticated and creative bots. In both cases, unexpected conflicts emerged. The complexity, and therefore the unpredictability, increases exponentially as you add more and more individuals to the system. So in a future system with a large number of very sophisticated robots, the unexpected behaviour could go beyond our imagination.</p>
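The scaling claim can be made concrete with a back-of-envelope count (an illustration, not a result from the article): even before any behaviour is modelled, the number of ways n agents can be wired together explodes.

```python
from math import comb

# With n agents there are comb(n, 2) possible pairwise links, and
# 2 ** comb(n, 2) possible interaction networks over those links --
# so exhaustively testing how a large system might behave is hopeless.
def pairwise_links(n):
    """Number of distinct agent pairs among n agents."""
    return comb(n, 2)

def possible_networks(n):
    """Number of ways to switch each pairwise link on or off."""
    return 2 ** pairwise_links(n)
```

Ten agents already admit 2**45 (about 35 trillion) distinct interaction networks; doubling to 20 agents takes that to 2**190.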
<h2>Self-driving madness</h2>
<p>For example, self-driving cars promise exciting advances in the efficiency and safety of road travel. But we don’t yet know what will happen once we have a large, wild system of fully autonomous vehicles. They may well behave very differently to a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars “trained” by different humans in different environments start interacting with one another.</p>
<p>Humans can adapt to new rules and conventions relatively quickly but can still have trouble switching between systems. This can be way more difficult for artificial agents. If a “German-trained” car was driving in Italy, for example, we just don’t know how it would deal with the written rules and unwritten cultural conventions being followed by the many other “Italian-trained” cars. Something as common as crossing an intersection could become lethally risky because we just wouldn’t know if the cars would interact as they were supposed to or whether they would do something completely unpredictable.</p>
<p>Now think of the killer robots that Elon Musk and his colleagues are worried about. A single killer robot could be very dangerous in the wrong hands. But what about an unpredictable system of killer robots? I don’t even want to think about it.</p>
<p class="fine-print"><em><span>Taha Yasseri receives funding from the European Commission and Google. </span></em></p>The unexpected behaviour of even simple bots is only going to get more dramatic as AI scales up.Taha Yasseri, Research Fellow in Computational Social Science, Oxford Internet Institute, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/807412017-07-12T06:40:42Z2017-07-12T06:40:42ZWe’re close to banning nuclear weapons – killer robots must be next<figure><img src="https://images.theconversation.com/files/177631/original/file-20170710-26770-1ailwor.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">International flags fly at United Nations headquarters, New York City. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/new-york-ny-usa-september-24-488226595?src=Wz3SZJ8USBb_JrBWiWyoqg-1-1">Osugi/shutterstock </a></span></figcaption></figure><p>While much of the world’s attention was focused last week on the G20 meeting in Hamburg, and Donald Trump’s first face-to-face meeting with Vladimir Putin, a historic decision took place at the United Nations (UN) in New York. </p>
<p>On Friday, 122 countries voted in favour of the “<a href="http://www.un.org/disarmament/ptnw/index.html">Treaty on the Prohibition of Nuclear Weapons</a>”. </p>
<p>Nuclear weapons were the only weapons of mass destruction without a treaty banning them, despite the fact that they are potentially the most
potent of all weapons. <a href="https://www.un.org/disarmament/wmd/bio/">Biological weapons were banned in 1975</a> and
<a href="https://www.un.org/disarmament/wmd/chemical/">chemical weapons in 1992</a>.</p>
<p>This new treaty sets the international norm that nuclear weapons are no longer morally acceptable. This is the first step along the road to their eventual elimination from our planet, although the issue of North Korea’s nuclear ambitions <a href="https://theconversation.com/as-an-historic-nuclear-weapons-treaty-is-reached-g20-leaders-miss-the-mark-on-north-korea-80464">remains unresolved</a>.</p>
<p>Earlier this year, thousands of scientists including 30 Nobel Prize winners signed an open letter calling for nuclear weapons to be banned. <a href="https://theconversation.com/why-we-signed-the-open-letter-from-scientists-supporting-a-total-ban-on-nuclear-weapons-75209">I was one</a> of the signees, and am pleased to see an outcome linked to this call so swiftly and resolutely answered. </p>
<p>More broadly, the nuclear weapon treaty offers hope for formal negotiations about lethal autonomous weapons (otherwise known as killer robots) due to start in the UN in November. Nineteen countries have <a href="http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_CountryViews_13Dec2016.pdf">already called for a pre-emptive ban on such weapons</a>, fearing they will be the next weapon of mass destruction that man will invent. </p>
<p>An arms race is underway to develop autonomous weapons in every theatre of war. In the air, for instance, BAE Systems is prototyping its <a href="https://en.wikipedia.org/wiki/BAE_Systems_Taranis">Taranis drone</a>. On the sea, the US Navy has launched its first autonomous ship, the <a href="https://en.wikipedia.org/wiki/Sea_Hunter">Sea Hunter</a>. And under the sea, Boeing has a working version of a 15-metre-long <a href="http://www.boeing.com/features/2016/03/bds-echo-voyager-03-16.page">Echo Voyager autonomous submarine</a>. </p>
<h2>New treaty, new hope</h2>
<p>The nuclear weapons <a href="http://www.undocs.org/en/a/conf.229/2017/L.3/Rev.1">treaty</a> is an important step towards delegitimising nuclear weapons, and puts strong moral pressure on the nuclear states like the US, the UK and Russia to reduce and eventually to eliminate such weapons from their arsenals. The treaty also obliges states to support victims of the use and testing of nuclear weapons, and to address environmental damage caused by nuclear weapons.</p>
<p>It has to be noted that the talks at the UN and the subsequent vote on the treaty were boycotted by <em>all</em> the nuclear states, as well as by a number of other countries. Australia has played a leading role in the nuclear non-proliferation treaty and other disarmament talks. Disappointingly, Australia was among the countries boycotting last week’s talks. In contrast, <a href="https://www.un.org/disarmament/ptnw/participants.html">New Zealand</a> played a leading role, with its ambassador serving as one of the Vice-Presidents of the talks. </p>
<p>Whilst 122 countries voted for the treaty, one country (the Netherlands) voted against, and one (Singapore) abstained from the vote. </p>
<p>The treaty will open for signature by states at the United Nations in New York on September 20, 2017. It will then come into force once 50 states have signed. </p>
<p>Even though major states have boycotted previous disarmament treaties, this has not prevented the treaties having effect. The US, for instance, has never signed the <a href="http://www.un.org/Depts/mine/UNDocs/ban_trty.htm">1999 accord on anti-personnel landmines</a>, wishing to support South Korea’s use of such mines in the Demilitarized Zone (DMZ) with North Korea. Nevertheless, the <a href="https://obamawhitehouse.archives.gov/the-press-office/2014/06/27/fact-sheet-changes-us-anti-personnel-landmine-policy">US follows the accord</a> outside of the DMZ. </p>
<p>Given that 122 countries voted for the nuclear prohibition treaty, it is likely that 50 states will sign the treaty in short order, and that it will then come into force. And, as seen with the landmine accord, this will increase pressure on nuclear states like the US and Russia to reduce and perhaps even eliminate their nuclear stockpiles. </p>
<p>When the chemical weapons convention entered into force in 1997, <a href="https://www.opcw.org/fileadmin/OPCW/Fact_Sheets/English/Fact_Sheet_6_-_destruction.pdf">eight countries declared stockpiles</a>, which are now <a href="https://www.opcw.org/fileadmin/OPCW/CSP/C-21/en/c2104_e_.pdf">partially or completely eliminated</a>.</p>
<h2>Public pressure</h2>
<p>The vote also raises hope on the issue of killer robots. Two years ago, I and thousands of my colleagues signed an open letter calling for a <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">ban on killer robots</a>. This pushed the issue up the agenda at the UN and helped get 123 nations to vote last December at the UN in Geneva for <a href="https://www.stopkillerrobots.org/2016/12/formal-talks/">the commencement of formal talks</a>.</p>
<p>The UN moves a little slowly at times. Nuclear disarmament is the longest-sought objective of the UN, dating back to <a href="https://documents-dds-ny.un.org/doc/RESOLUTION/GEN/NR0/032/52/IMG/NR003252.pdf?OpenElement">the very first resolution adopted by the General Assembly in January 1946</a>, shortly after nuclear bombs had been used by the US for the first time. Nevertheless, this is a hopeful moment in a time when hope is in short supply. </p>
<p>The UN does move in the right direction and countries can come together and act in our common interest. Bravo.</p><img src="https://counter.theconversation.com/content/80741/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Treaties banning biological and chemical weapons are in place, and the path is clear to remove nuclear weapons too. Lethal autonomous weapons (killer robots) should be next.Toby Walsh, Professor of AI at UNSW, Research Group Leader, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/702452016-12-30T21:11:36Z2016-12-30T21:11:36ZFinding trust and understanding in autonomous technologies<p>In 2016, self-driving cars went mainstream. Uber’s autonomous vehicles <a href="http://fortune.com/2016/09/14/uber-self-driving-cars-pittsburgh/">became ubiquitous</a> in neighborhoods where I live in Pittsburgh, and <a href="http://www.huffingtonpost.com/entry/california-stops-uber-self-driving-cars_us_585bda66e4b0d9a594573319">briefly in San Francisco</a>. The U.S. Department of Transportation issued <a href="https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf">new regulatory guidance</a> for them. Countless <a href="http://dx.doi.org/10.1126/science.aaf2654">papers</a> and <a href="http://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html">columns</a> discussed how self-driving cars <a href="http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/">should</a> <a href="https://www.theguardian.com/technology/2016/aug/22/self-driving-cars-moral-dilemmas">solve</a> <a href="https://theconversation.com/helping-autonomous-vehicles-and-humans-share-the-road-68044">ethical quandaries</a> when things go wrong. 
And, unfortunately, 2016 also saw the <a href="http://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html">first fatality involving an autonomous vehicle</a>.</p>
<p>Autonomous technologies are rapidly spreading beyond the transportation sector, into <a href="http://dx.doi.org/10.1126/scitranslmed.aad9398">health care</a>, <a href="https://www.cybergrandchallenge.com/">advanced cyberdefense</a> and even <a href="http://www.raytheon.com/capabilities/products/phalanx/">autonomous weapons</a>. In 2017, we’ll have to decide whether we can trust these technologies. That’s going to be much harder than we might expect.</p>
<p>Trust is complex and varied, but also a key part of our lives. We often trust technology <a href="http://jcr.sagepub.com/content/2/4/265">based on predictability</a>: I trust something if I know what it will do in a particular situation, even if I don’t know why. For example, I trust my computer because I know how it will function, including when it will break down. I stop trusting if it starts to behave differently or surprisingly. </p>
<p>In contrast, my trust in my wife is based on <a href="https://www.jstor.org/stable/259288">understanding her beliefs, values and personality</a>. More generally, interpersonal trust does not involve knowing exactly what the other person will do – my wife certainly surprises me sometimes! – but rather why they act as they do. And of course, we can trust someone (or something) in both ways, if we know both what they will do and why.</p>
<p>I have been exploring possible bases for our trust in self-driving cars and other autonomous technology from both ethical and psychological perspectives. These are devices, so predictability might seem like the key. Because of their autonomy, however, we need to consider the importance and value – and the challenge – of learning to trust them in the way we trust other human beings.</p>
<h2>Autonomy and predictability</h2>
<p>We want our technologies, including self-driving cars, to behave in ways we can predict and expect. Of course, these systems can be quite sensitive to the context, including other vehicles, pedestrians, weather conditions and so forth. In general, though, we might expect that a self-driving car that is repeatedly placed in the same environment should presumably behave similarly each time. But in what sense would these highly predictable cars be autonomous, rather than merely automatic?</p>
<p><a href="https://ntrs.nasa.gov/search.jsp?R=19790007441">There have</a> <a href="http://dx.doi.org/10.1080/001401399185595">been</a> <a href="http://ws680.nist.gov/publication/get_pdf.cfm?pub_id=823618">many</a> different <a href="http://dx.doi.org/10.5898/JHRI.3.2.Beer">attempts</a> to <a href="http://www.dtic.mil/dtic/tr/fulltext/u2/a601656.pdf">define</a> <a href="http://standards.sae.org/j3016_201609/">autonomy</a>, but they all have this in common: Autonomous systems can make their own (substantive) decisions and plans, and thereby can act differently than expected. </p>
<p>In fact, one reason to employ autonomy (as distinct from automation) is precisely that those systems can pursue unexpected and surprising, though justifiable, courses of action. For example, <a href="https://deepmind.com/research/alphago/">DeepMind’s AlphaGo</a> won the second game of its recent Go series against Lee Sedol in part because of <a href="https://www.wired.com/2016/03/googles-ai-viewed-move-no-human-understand/">a move that no human player would ever make, but was nonetheless the right move</a>. But those same surprises make it difficult to establish predictability-based trust. Strong trust based solely on predictability is arguably possible only for automated or automatic systems, precisely because they are predictable (assuming the system functions normally).</p>
<h2>Embracing surprises</h2>
<p>Of course, other people frequently surprise us, and yet we can trust them to a remarkable degree, even giving them life-and-death power over ourselves. Soldiers trust their comrades in complex, hostile environments; a patient trusts her surgeon to excise a tumor; and in a more mundane vein, my wife trusts me to drive safely. This interpersonal trust enables us to embrace the surprises, so perhaps we could develop something like interpersonal trust in self-driving cars?</p>
<p>In general, interpersonal trust requires an understanding of why someone acted in a particular way, even if you can’t predict the exact decision. My wife might not know exactly how I will drive, but she knows the kinds of reasoning I use when I’m driving. And it is actually relatively easy to understand why someone else does something, precisely because we all think and reason roughly similarly, though with different “raw ingredients” – our beliefs, desires and experiences. </p>
<p>In fact, we continually and unconsciously make inferences about other people’s beliefs and desires based on their actions, in large part by assuming that they think, reason and decide roughly as we do. All of these inferences and reasoning based on our shared (human) cognition enable us to understand someone else’s reasons, and thereby build interpersonal trust over time.</p>
<h2>Thinking like people?</h2>
<p>Autonomous technologies – self-driving cars, in particular – do not think and decide like people. There have been efforts, both <a href="http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33607">past</a> and <a href="http://dx.doi.org/10.1126/science.aab3050">recent</a>, to develop computer systems that think and reason like humans. However, one consistent theme of machine learning over the past two decades has been the enormous gains made precisely by not requiring our artificial intelligence systems to operate in human-like ways. Instead, machine learning algorithms and systems such as AlphaGo have often been able to <a href="http://dx.doi.org/10.1038/nature16961">outperform human experts</a> by focusing on specific, localized problems, and then solving them quite differently than humans do.</p>
<p>As a result, attempts to interpret an autonomous technology in terms of human-like beliefs and desires can go spectacularly awry. When a human driver sees a ball in the road, most of us automatically slow down significantly, to avoid hitting a child who might be chasing after it. If we are riding in an autonomous car and see a ball roll into the street, we expect the car to recognize it, and to be prepared to stop for running children. The car might, however, see only an obstacle to be avoided. If it swerves without slowing, the humans on board might be alarmed – and a kid might be in danger.</p>
<p>Our inferences about the “beliefs” and “desires” of a self-driving car will almost surely be erroneous in important ways, precisely because the car doesn’t have any human-like beliefs or desires. We cannot develop interpersonal trust in a self-driving car simply by watching it drive, as we will not correctly infer the whys behind its actions. </p>
<p>Of course, society or marketplace customers could insist en masse that self-driving cars have human-like (psychological) features, precisely so we could understand and develop interpersonal trust in them. This strategy would give a whole new meaning to “<a href="http://dx.doi.org/10.1002/9781118984390.ch1">human-centered design</a>,” since the systems would be designed specifically so their actions are interpretable by humans. But it would also require including novel <a href="http://stanford.edu/%7Enikmart/papers/hri16paper_CameraReady_small.pdf">algorithms</a> and <a href="http://repository.cmu.edu/cgi/viewcontent.cgi?article=1147&context=dissertations">techniques</a> in the self-driving car, all of which would represent a massive change from current research and development strategies for self-driving cars and other autonomous technologies.</p>
<p>Self-driving cars have the potential to radically reshape our transportation infrastructure in many beneficial ways, but only if we can trust them enough to actually use them. And ironically, the very feature that makes self-driving cars valuable – their flexible, autonomous decision-making across diverse situations – is exactly what makes it hard to trust them.</p><img src="https://counter.theconversation.com/content/70245/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Danks does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The ethics and psychology of trust suggest ways we might learn to understand self-driving cars, but also show why doing so might be more challenging than we expect.David Danks, Professor of Philosophy and Psychology, Carnegie Mellon UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/681552016-12-14T19:08:28Z2016-12-14T19:08:28ZStar Wars: Rogue One highlights an uncomfortable fact – military robots can change sides<p>The latest Star Wars movie, <a href="http://www.imdb.com/title/tt3748528/">Rogue One</a>, introduces us to a new droid, <a href="http://starwars.wikia.com/wiki/K-2SO">K-2SO</a>, the robotic lead of the story. </p>
<p>Without giving away too many spoilers, K-2SO is part of the Rebellion freedom fighter group that is tasked with stealing the plans to the first <a href="http://starwars.wikia.com/wiki/Death_Star">Death Star</a>, the infamous moon-sized battle station from the original <a href="http://www.imdb.com/title/tt0076759/">Star Wars</a> movie.</p>
<p>The significance of K-2SO is his back-story. K-2SO is an autonomous military robot that used to fight for the Rebellion’s enemy – the Galactic Empire. He was captured and reprogrammed by the rebels and is now a core member of the Rogue One group.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/YWNvdoRnNv8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Rogue One: A Star Wars Story trailer ‘Trust’</span></figcaption>
</figure>
<p>K-2SO is not the first robot to swap sides in a movie. Remember that the Terminator’s initial mission was to kill Sarah Connor in the <a href="http://www.imdb.com/title/tt0088247/">first movie</a>; in later movies it was reprogrammed to protect her and her son John Connor.</p>
<p>This raises the question of whether, in real life, a programmed military machine could be encouraged, reprogrammed or hacked to defect.</p>
<h2>Soldiers swapping sides</h2>
<p>The idea of human soldiers swapping sides during wars and conflicts is nothing new. There are numerous examples of soldiers surrendering and then announcing that they have information and would like to help and sometimes fight for their captors.</p>
<p>It is the information about battle plans and tactics that these defecting soldiers have that could potentially change the course of a battle or a military campaign.</p>
<p>One of the most famous defectors was <a href="http://www.biography.com/people/benedict-arnold-9189320">General Benedict Arnold</a>. Arnold was a general in the Continental Army during the American War of Independence, but he defected to the British Army and became a brigadier general. He led British forces against the Americans and retired to London after the war.</p>
<h2>Weapons technology</h2>
<p>The industrial revolution and the rise of mechanical weapons such as tanks, aircraft and submarines in the early 20th century changed the nature of defecting.</p>
<p>It was the development of ever more advanced weapons that gave a nation its advantage over its military rivals. Stealing an enemy’s new weapons was almost impossible, so it was up to defectors to deliver the plans for the new weapons or, sometimes, examples of the actual weapons to the other side. </p>
<p><a href="https://www.strategypage.com/cic/docs/cic304b.asp">Martin Monti</a>, of the United States Army Air Corps, defected to Italy during 1944 and handed over a photographic reconnaissance version of the <a href="http://www.lockheedmartin.com.au/us/100years/stories/p-38.html">P-38 Lightning</a> aircraft to the Nazi military. He then joined the Nazi SS. </p>
<p>In 1976, during the Cold War, <a href="http://www.bbc.com/future/story/20160905-the-pilot-who-stole-a-secret-soviet-fighter-jet">Viktor Belenko</a> flew his highly secret MiG-25 jet fighter from the USSR to Japan.</p>
<p>NATO had long wanted to get the technical details of this aircraft as it was rumoured to be able to fly three times faster than the speed of sound.</p>
<p>Japan gave the US access to the MiG, and Belenko was eventually granted US citizenship. The plane was stripped and analysed by the Americans, who also had a copy of the aircraft’s technical manual, which Belenko had brought with him. </p>
<h2>Defectors not necessary</h2>
<p>In the 21st century we have seen the development of remotely controlled systems for reconnaissance, surveillance and the delivery of weapons to targets. Such systems are likely <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2722311">to be very important</a> in the future of defence capabilities.</p>
<p>As this equipment does not require a person on-board, it means that human defectors or spies are no longer required to deliver this robotic hardware to the opposition.</p>
<p>It is impossible to know for sure when the first unmanned system was successfully captured. But because these systems rely on external radio commands and infrastructure, such as GPS, <a href="http://www.forbes.com/sites/thomasbrewster/2015/08/08/qihoo-hacks-drone-gps/">it is plausible</a> that they can be taken over and captured – and it has almost certainly already happened.</p>
<p>In 2011, a US Air Force drone came down in Iran and <a href="https://www.washingtonpost.com/world/national-security/iran-says-it-downed-us-stealth-drone-pentagon-acknowledges-aircraft-downing/2011/12/04/gIQAyxa8TO_story.html">was recovered by the Iranian state</a>. That aircraft was a highly secretive <a href="http://www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104547/rq-170-sentinel.aspx">RQ-170</a> stealth drone and the Iranians <a href="http://www.dailytech.com/Iran+Yes+We+Hacked+the+USs+Drone+and+Heres+How+We+Did+It/article23533.htm">claimed that they had “spoofed” the drone</a> into landing in Iran by creating fake GPS signals. </p>
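<p>One widely discussed countermeasure against this kind of GPS spoofing is to cross-check each GPS fix against dead reckoning from the aircraft’s inertial sensors, rejecting any fix that implies a physically implausible jump. The sketch below is purely illustrative – the function, units and tolerance figure are invented for this example and are not drawn from any real flight system:</p>

```python
import math

def plausible_fix(prev, new, velocity, dt, tolerance=50.0):
    """Flag a GPS fix whose implied motion disagrees with dead reckoning.

    prev, new: (x, y) positions in metres; velocity: (vx, vy) in m/s;
    dt: seconds since the previous fix; tolerance: allowed disagreement
    in metres (an arbitrary figure for this sketch).
    """
    # Where inertial dead reckoning says the vehicle should now be.
    expected = (prev[0] + velocity[0] * dt, prev[1] + velocity[1] * dt)
    error = math.hypot(new[0] - expected[0], new[1] - expected[1])
    return error <= tolerance

# A fix consistent with the inertial data passes...
print(plausible_fix((0, 0), (10, 0), (10, 0), 1.0))    # True
# ...while a sudden multi-kilometre "teleport", typical of a crude
# spoofing attack, is rejected.
print(plausible_fix((0, 0), (5000, 0), (10, 0), 1.0))  # False
```

<p>Real anti-spoofing work is far more sophisticated – involving signal-level authentication and statistical filtering – but the basic idea of checking one sensor against another is the same.</p>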
<p>Experts in the US <a href="http://www.techworld.com/news/security/spy-drone-gps-spoofing-claims-doubted-by-security-analysts-3326032/">doubted those claims</a>, but however the drone was captured, Iran ended up with a nearly intact state-of-the-art stealth drone. </p>
<p>They put it on display to international media and stated that they would reverse engineer it and create their own version of this high-tech robotic surveillance aircraft. Iran now appears to <a href="https://theaviationist.com/2016/10/02/iran-unveils-new-ucav-modeled-on-captured-u-s-rq-170-stealth-drone/">have a squadron of these stealth drones</a>, all based on the original captured aircraft.</p>
<h2>Trusting autonomous robots</h2>
<p>An obvious way to prevent the claimed GPS-spoofing or other similar hacks is to create systems that are truly autonomous and do not require or use external communication systems. </p>
<p>Such robots should be immune to hacking once deployed on their missions. But the development and use of truly autonomous robot weapon systems is a controversial topic. </p>
<p><a href="https://www.stopkillerrobots.org/">The Campaign to Stop Killer Robots</a> was launched in 2013 both to educate the general public about the possible dangers of autonomous killer robots and to persuade the highest-level decision-makers in governments and at the United Nations that such robots should be banned.</p>
<p>The principle at the heart of the campaign is that a <a href="https://www.theguardian.com/sustainable-business/2016/dec/02/the-moral-challenge-of-military-robots-arises-when-we-delegate-fighting-wars">human should make the final decision</a> before a weapon is launched at its intended target. </p>
<p>The International Committee of the Red Cross has pointed out that the so-called “<a href="https://www.icrc.org/eng/resources/documents/audiovisuals/video/2014/rules-of-war.htm">rules of war</a>” must be coded into autonomous military robots of the future. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HwpzzAefx9M?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The International Committee of the Red Cross video about the rules of war and future autonomous military robots.</span></figcaption>
</figure>
<p>Some robotics engineers and researchers are working on exactly this and have started to develop the algorithms that will <a href="http://www.nytimes.com/2015/01/11/magazine/death-by-robot.html">enable autonomous military robots to be ethical</a>. They propose that robots may be able to <a href="http://www.huffingtonpost.com.au/entry/lethal-autonomous-weapons-ronald-arkin_us_574ef3bbe4b0af73af95ea36">protect civilians better than human soldiers</a>. </p>
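<p>Such proposals are often described as a pre-engagement constraint check: every coded rule must hold before force is permitted. The deliberately simplistic sketch below illustrates only the shape of that idea – the rules, field names and data are all hypothetical, and real research systems are vastly more complex:</p>

```python
# Toy sketch of a pre-engagement constraint check. Rules and target
# fields are invented for illustration, not drawn from any real system.

RULES = [
    ("target must be verified military", lambda t: t["verified_military"]),
    ("no protected site in blast radius", lambda t: not t["near_protected_site"]),
    ("no expected civilian harm", lambda t: t["est_civilian_harm"] == 0),
]

def may_engage(target):
    """Return (allowed, failed_rules): permitted only if every rule holds."""
    failures = [name for name, rule in RULES if not rule(target)]
    return (len(failures) == 0, failures)

target = {"verified_military": True,
          "near_protected_site": True,
          "est_civilian_harm": 0}
print(may_engage(target))
# (False, ['no protected site in blast radius'])
```

<p>Note what the sketch leaves out: judging whether a target really is “verified military”, or estimating civilian harm, is precisely the kind of contextual judgment critics argue cannot be reduced to code.</p>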
<p>But all of this assumes that the human creators of the robots are acting ethically and want the robots to also be ethical. </p>
<p>What happens if a future autonomous soldier robot is tasked with doing something that it decides goes against its code of ethics? Will it just say “no”, or will it conclude that the most appropriate action is to turn on its owner? Would it defect to the other side? How would loyalty be built into an autonomous robot and how would the robot’s creator ensure that it could be trusted to not switch sides? </p>
<p>In the coming years you are likely to read dozens of stories about <a href="http://spectrum.ieee.org/static/special-report-trusting-robots">research into trusted autonomy</a>. It is a hot research topic and a critically important one as the world begins to outsource its fighting to robots.</p>
<p>Rogue One: A Star Wars Story may be set a long time ago in a galaxy far, far away, but its plot lines are actually based in our reality.</p>
<p>Dealing with states that build frightening new weapons, stealing plans to those weapons and then fighting back with robots is not science fiction. And it may be that soon we see those fighting robots turn on their creators.</p><img src="https://counter.theconversation.com/content/68155/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jonathan Roberts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rebel fighters in the latest Star Wars movie are helped by a droid that was captured from the enemy and reprogrammed. Could that happen in real life with today’s autonomous weapons?Jonathan Roberts, Professor in Robotics, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/582622016-06-16T09:57:22Z2016-06-16T09:57:22ZLosing control: The dangers of killer robots<figure><img src="https://images.theconversation.com/files/125442/original/image-20160606-13040-16h7t1k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Should we act to prevent this from ever happening?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-312599078/stock-photo-sci-fi-fantasy-d-robot-the-killer-with-titaniumn-amor.html">Armed robot via shutterstock.com</a></span></figcaption></figure><p>New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is <a href="http://futureoflife.org/open-letter-autonomous-weapons/">fast approaching</a>. Fully autonomous weapons, also known as “killer robots,” are quickly moving from the realm of science fiction toward reality.</p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/125440/original/image-20160606-13061-7be5iw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The unmanned Sea Hunter gets underway. At present it sails without weapons, but it exemplifies the move toward greater autonomy.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Sea_Hunter_gets_underway_on_the_Willamette_River_following_a_christening_ceremony_in_Portland,_Ore._(25702146834).jpg">U.S. Navy/John F. Williams</a></span>
</figcaption>
</figure>
<p>These weapons, which could operate on land, in the air or at sea, threaten to revolutionize armed conflict and law enforcement in alarming ways. <a href="http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/54B1B7A616EA1D10C1257CCC00478A59/$file/Article_Arkin_LAWS.pdf">Proponents say these killer robots are necessary</a> because modern combat moves so quickly, and because having robots do the fighting would keep soldiers and police officers out of harm’s way. But the threats to humanity would outweigh any military or law enforcement benefits. </p>
<p>Removing humans from the targeting decision would create a dangerous world. Machines would make life-and-death determinations outside of human control. The risk of disproportionate harm or erroneous targeting of civilians would increase. No person could be held responsible. </p>
<p>Given the <a href="https://www.hrw.org/sites/default/files/supporting_resources/11.2013_memo_to_ccw_delegates_fully_autonomous_weapons.pdf">moral, legal and accountability risks</a> of fully autonomous weapons, preempting their development, production and use cannot wait. The best way to handle this threat is an international, legally binding ban on weapons that lack meaningful human control.</p>
<h2>Preserving empathy and judgment</h2>
<p>At least <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/37D51189AC4FB6E1C1257F4D004CAFB2?OpenDocument">20 countries have expressed in U.N. meetings</a> the belief that humans should dictate the selection and engagement of targets. Many of them have echoed <a href="https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control">arguments laid out in a new report</a>, of which I was the lead author. The report was released in April by <a href="http://www.hrw.org">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a>, two organizations that have been campaigning for a ban on fully autonomous weapons.</p>
<p>Retaining human control over weapons is a <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G13/127/76/PDF/G1312776.pdf?OpenElement">moral imperative</a>. Because they possess empathy, people can feel the emotional weight of harming another individual. Their respect for human dignity can – and should – serve as a check on killing. </p>
<p>Robots, by contrast, lack real emotions, including compassion. In addition, inanimate machines could not truly understand the value of any human life they chose to take. Allowing them to determine when to use force would undermine human dignity. </p>
<p>Human control also promotes compliance with international law, which is designed to protect civilians and soldiers alike. For example, the laws of war <a href="https://www.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=4BEBD9920AE0AEAEC12563CD0051DC9E">prohibit disproportionate attacks</a> in which expected civilian harm outweighs anticipated military advantage. Humans can apply their judgment, based on past experience and moral considerations, and make case-by-case determinations about proportionality. </p>
<p>It would be almost impossible, however, <a href="https://www.hrw.org/sites/default/files/related_material/Advancing%20the%20Debate_8May2014_Final.pdf">to replicate that judgment in fully autonomous weapons</a>, and they could not be preprogrammed to handle all scenarios. As a result, these weapons would be unable to act as “<a href="http://www.icty.org/sid/10052">reasonable commanders</a>,” the traditional legal standard for handling complex and unforeseeable situations. </p>
<p>In addition, the loss of human control would threaten a target’s <a href="https://www.hrw.org/sites/default/files/reports/arms0514_ForUpload_0.pdf">right not to be arbitrarily deprived of life</a>. Upholding this fundamental human right is an obligation during law enforcement as well as military operations. Judgment calls are required to assess the necessity of an attack, and humans are better positioned than machines to make them.</p>
<h2>Promoting accountability</h2>
<p>Keeping a human in the loop on decisions to use force further ensures that <a href="https://www.hrw.org/sites/default/files/reports/arms0415_ForUpload_0.pdf">accountability for unlawful acts</a> is possible. Under international criminal law, a human operator would in most cases escape liability for the harm caused by a weapon that acted independently. Unless he or she intentionally used a fully autonomous weapon to commit a crime, it would be unfair and legally problematic to hold the operator responsible for the actions of a robot that the operator could neither prevent nor punish.</p>
<p>There are additional obstacles to finding programmers and manufacturers of fully autonomous weapons liable under civil law, in which a victim files a lawsuit against an alleged wrongdoer. The United States, for example, establishes <a href="https://supreme.justia.com/cases/federal/us/487/500/case.html">immunity for most weapons manufacturers</a>. It also has high standards for proving a product was defective in a way that would make a manufacturer legally responsible. In any case, victims from other countries would likely lack the access and money to sue a foreign entity. The gap in accountability would weaken deterrence of unlawful acts and leave victims unsatisfied that someone was punished for their suffering. </p>
<h2>An opportunity to seize</h2>
<p>At a U.N. meeting in Geneva in April, <a href="http://www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2016/meeting-experts-laws/documents/DraftRecommendations_15April_final.pdf">94 countries recommended beginning formal discussions</a> about “lethal autonomous weapons systems.” The talks would consider whether these systems should be restricted under the <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, a disarmament treaty that has regulated or banned several other types of weapons, including incendiary weapons and blinding lasers. The nations that have joined the treaty will meet in December for a review conference to set their agenda for future work. It is crucial that the members agree to start a formal process on lethal autonomous weapons systems in 2017.</p>
<p>Disarmament law provides precedent for requiring human control over weapons. For example, the international community adopted the widely accepted treaties banning <a href="https://www.icrc.org/ihl/INTRO/450?OpenDocument">biological weapons</a>, <a href="https://www.icrc.org/ihl/INTRO/553?OpenDocument">chemical weapons</a> and <a href="https://www.icrc.org/ihl/INTRO/580">landmines</a> in large part because of humans’ inability to exercise adequate control over their effects. Countries should now prohibit fully autonomous weapons, which would pose an equal or greater humanitarian risk.</p>
<p>At the December review conference, countries that have joined the Convention on Conventional Weapons should take concrete steps toward that goal. They should initiate negotiations of a new international agreement to address fully autonomous weapons, moving beyond general expressions of concern to specific action. They should set aside enough time in 2017 – at least several weeks – for substantive deliberations.</p>
<p>While the process of creating international law is notoriously slow, countries can move quickly to address the threats of fully autonomous weapons. They should seize the opportunity presented by the review conference because the alternative is unacceptable: Allowing technology to outpace diplomacy would produce dire and unparalleled humanitarian consequences.</p>
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch.</span></em></p>
<p class="fine-print"><em>Machines that can target and kill people without human intervention or accountability pose a moral threat to the world. Bonnie Docherty, Lecturer on Law, Senior Clinical Instructor at Harvard Law School's International Human Rights Clinic, Harvard University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>World split on how to regulate ‘killer robots’</h1>
<p class="fine-print"><em>Published 2016-04-18.</em></p>
<figure><img src="https://images.theconversation.com/files/118676/original/image-20160414-4703-3gu3p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">DARPA is developing an autonomous anti-submarine warfare vessel, ACTUV.</span> <span class="attribution"><span class="source">DARPA</span></span></figcaption></figure><p>Diplomats from around the world met in Geneva last week for the United Nations’ third <a href="http://www.unog.ch/80256EE600585943/%28httpPages%29/37D51189AC4FB6E1C1257F4D004CAFB2?OpenDocument">Informal Expert Meeting</a> on lethal autonomous weapons systems (<a href="https://theconversation.com/au/topics/lethal-autonomous-weapons-systems">LAWS</a>), commonly dubbed “killer robots”.</p>
<p>Their aim was to make progress on deciding how, or if, LAWS should be regulated under <a href="https://www.icrc.org/eng/assets/files/other/what_is_ihl.pdf">international humanitarian law</a>. </p>
<p>A range of views were expressed at the meeting, from Pakistan being in favour of a full ban, to the UK favouring no new regulation for LAWS, and several positions in between. </p>
<p>Despite the range of views on offer, there was some common ground. </p>
<p>It is generally agreed that LAWS are governed by international humanitarian law. For example, robots cannot ignore the principles of <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter1_rule1">distinction between civilians and combatants</a>, or <a href="https://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter4_rule14">proportionality in the scale of attack</a>. </p>
<p>Human commanders would also have command responsibility for their robots, just as they do for their service men and women. Robots cannot be lawfully used to perpetrate genocide, massacres and war crimes.</p>
<p>Beyond that, there are broadly four positions that the various nations took.</p>
<h3>Position 1: Rely on existing laws</h3>
<p>The UK’s position is that existing international humanitarian law is sufficient to regulate emerging technologies in artificial intelligence (<a href="https://theconversation.com/au/topics/artificial-intelligence">AI</a>) and <a href="https://theconversation.com/au/topics/robotics">robotics</a>. </p>
<p>The argument is that international humanitarian law was sufficient to regulate aeroplanes and submarines when they emerged, and it will cope with many kinds of LAWS as well. This would include <a href="http://www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104469/mq-1b-predator.aspx">Predator drones</a> with an “ethical governor” – which is software designed to determine whether a strike conforms with the specified rules of engagement and international humanitarian law – or autonomous anti-submarine warfare ships, such as the US Navy’s experimental autonomous <a href="http://www.darpa.mil/program/anti-submarine-warfare-continuous-trail-unmanned-vessel">Sea Hunter</a>.</p>
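The "ethical governor" concept described above can be sketched in a deliberately simplified way: software that vetoes a proposed action unless it passes a series of checks modelled on the principles of distinction and proportionality. Everything below (the field names, the crude proportionality comparison) is hypothetical and purely illustrative; it is not drawn from any real system.

```python
# Toy illustration of an "ethical governor": a gate that only permits an
# action when every rules-of-engagement check passes. All names and the
# proportionality heuristic are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_is_military: bool          # distinction: is this a military objective?
    expected_civilian_harm: int       # estimated incidental civilian harm
    expected_military_advantage: int  # rough score of anticipated advantage

def governor_permits(strike: ProposedStrike) -> bool:
    """Return True only if the proposal passes every check."""
    if not strike.target_is_military:           # principle of distinction
        return False
    if strike.expected_military_advantage <= 0:  # no advantage, no permission
        return False
    # Crude proportionality test: incidental harm must not be excessive
    # relative to the anticipated military advantage.
    return strike.expected_civilian_harm <= strike.expected_military_advantage

print(governor_permits(ProposedStrike(True, 0, 5)))   # True
print(governor_permits(ProposedStrike(False, 0, 5)))  # False
```

Note the design point this makes concrete: the governor is a fixed, human-written rule book sitting between the targeting system and the weapon, which is why the UK position treats such systems as regulable under existing law.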
<h3>Position 2: Ban machine learning</h3>
<p>The French delegation said a ban would be “premature” and that they are open to accepting the legality of an “<a href="https://theconversation.com/we-need-to-keep-humans-in-the-loop-when-robots-fight-wars-53641">off the loop</a>” LAWS with a “human in the wider loop”. This means the machine can select targets and fire autonomously, but humans still set the rules of engagement. </p>
<p>However, they were open to regulating <a href="https://developer.nvidia.com/deep-learning">machine learning</a> in “off the loop” LAWS (which do not yet exist). Thus, they might support a future ban on any self-learning AI – similar to <a href="https://theconversation.com/ai-has-beaten-us-at-go-so-what-next-for-humanity-55945">AlphaGo</a>, which recently beat the human world Go champion – in direct control of missiles without humans in the wider loop. The main concern is that such AIs might be unpredictable.</p>
<h3>Position 3: Ban ‘off the loop’ with a ‘human in the wider loop’</h3>
<p>The Dutch and Swiss delegations suggested “off the loop” systems with a “human in the wider loop” could comply with international humanitarian law, exhibit sufficiently meaningful human control and meet the dictates of the public conscience. </p>
<p>The UK, France and Canada spoke against a ban on such systems. </p>
<p>Advocates of such robotic weapons claim they could be morally superior to human soldiers because they would be more accurate, more precise and less prone to bad decisions caused by panic or revenge. </p>
<p>Opponents argue they could misidentify targets in cluttered or occluded environments, and that delegating the decision to kill to a machine is morally unacceptable.</p>
<p>For example, the <a href="http://www.stopkillerrobots.org/2016/04/thirdmtg/">Holy See and 13 other nations</a> think a real-time human intervention in the decision to take life is morally required, so there must always be a human in the loop. </p>
<p>This position requires exceptions for already fielded “defensive” weapons such as the <a href="http://www.raytheon.com.au/capabilities/products/phalanx/">Phalanx Close-In Weapon System</a>, and long-accepted “off the loop” weapons such as naval mines, which have existed since the 1860s.</p>
<h3>Position 4: Ban ‘in the loop’ weapons</h3>
<p>Pakistan and Palestine will support any measure broad enough to ban telepiloted drones. However, most nations see this as beyond the scope of the LAWS debate, as humans make the decisions to select and engage targets, even though many agree drones are a human rights disaster.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/119015/original/image-20160418-11170-1x1084d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Northrop Grumman X-47A Pegasus drone is being trialed by the US Navy.</span>
<span class="attribution"><span class="source">DARPA</span></span>
</figcaption>
</figure>
<h2>Defining lines in terms of Turing</h2>
<p>Formally, an AI is a <a href="https://theconversation.com/alan-turings-legacy-is-even-bigger-than-we-realise-34735">Turing machine</a> that mechanically applies rules to symbolic inputs to generate outputs.</p>
<p>A ban on machine learning LAWS is a ban on AIs that update their own rule book for making lethal decisions. A ban on “wider loop” LAWS is a ban on AIs with a human-written rule book making lethal decisions. A ban on “in the loop” LAWS is a ban on any use of human-piloted robots as weapons at all. </p>
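The distinction between a human-written rule book and a self-updating one can be made concrete with a toy sketch. Nothing here models a real weapon; the threshold, update rule and numbers are invented for illustration only.

```python
# Toy contrast between the two categories discussed above.
# A "wider loop" system applies a fixed, human-written rule; a machine
# learning system rewrites its own rule (here, a threshold) from feedback.

def fixed_rule_decision(signal: float) -> bool:
    """Human-written rule book: the threshold was chosen by a person
    and never changes."""
    return signal > 0.8

class LearningDecision:
    """Self-updating rule book: the threshold shifts with each
    labelled example, so its future behaviour is harder to predict."""
    def __init__(self, threshold: float = 0.8, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def decide(self, signal: float) -> bool:
        return signal > self.threshold

    def update(self, signal: float, was_correct: bool) -> None:
        # Nudge the rule after mistakes: raise the bar after a false
        # alarm, lower it after a miss.
        if not was_correct:
            if self.decide(signal):
                self.threshold += self.rate
            else:
                self.threshold -= self.rate

learner = LearningDecision()
learner.update(0.9, was_correct=False)   # false alarm, so the bar rises
print(round(learner.threshold, 2))       # 0.9
```

The unpredictability that worries the French delegation lives in `update`: after enough feedback, the machine's rule book no longer matches anything a human wrote down.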
<p>Opinions also differ as to whether control of decisions by Turing computation qualifies as meaningful or human control. </p>
<h2>Next steps</h2>
<p>The Geneva meeting was an informal expert meeting to clarify definitions and gain consensus on what (if anything) might be banned or regulated in a treaty. As such, there were no votes on treaty wording.</p>
<p>The most likely outcome is the setup of a panel of government experts to continue discussions. AI, robotics and LAWS are still being developed. As things stand, the world is at Position 1: relying on existing international humanitarian law. </p>
<p>Provided an AlphaGo in charge of missiles complied with principles like discrimination and proportionality, it would not be clearly illegal, just arguably so.</p>
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>The future of warfare may include many lethal autonomous weapons, but the world can’t decide how, or if, to regulate them. Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Australia should take a stand against ‘killer robots’</h1>
<p class="fine-print"><em>Published 2016-04-13.</em></p>
<figure><img src="https://images.theconversation.com/files/118514/original/image-20160413-18093-1d4vgmm.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Could killer robots like Maximilian from the 1979 film Black Hole become reality?</span> <span class="attribution"><span class="source">Walt Disney Productions</span></span></figcaption></figure><p>Lethal autonomous weapons (or <a href="https://theconversation.com/au/topics/killer-robots">killer robots</a> as the media likes to call them) are the subject of intense discussion in the corridors and committee rooms of the United Nations in Geneva this week.</p>
<p>The international talking shop is playing host to the <a href="http://goo.gl/JU1YJC">third round of multilateral talks</a> on this topic.</p>
<p>The meeting follows on from increasing concerns about the rapid progress being made in areas like artificial intelligence (<a href="https://theconversation.com/au/topics/artificial-intelligence">AI</a>) and <a href="https://theconversation.com/au/topics/robotics">robotics</a>. <a href="http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/">Stephen Hawking, Elon Musk, Bill Gates</a> and others have expressed concern about the direction these technologies may be taking us.</p>
<p>Last July, thousands of researchers working in AI and robotics came together and issued an open letter calling upon the UN to put a <a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">pre-emptive ban</a> in place on such weapons.</p>
<p>In the interests of disclosure, I helped put the letter together and will be <a href="http://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_SideEventCCW_14April2016rv.pdf">talking at the UN</a> meeting on Thursday.</p>
<h2>Where will this end?</h2>
<p>If we don’t get a ban in place, the end point is clear to my colleagues and me: there will be an arms race and it will look much like the dystopian future painted by Hollywood movies like the Terminator series.</p>
<p>The technology will undoubtedly fall into the hands of terrorists and rogue nations. These people will have no qualms about removing any safeguards in place on its use. Or using it against us.</p>
<p>Unfortunately, we won’t simply have robots fight robots. Wars today are asymmetric and it will be robots against humans. And many of those humans will be innocent civilians.</p>
<p>This is a terrifying prospect.</p>
<h2>We don’t need to end there</h2>
<p>The world has come together in the past to decide not to weaponise a technology. We have bans on biological and chemical weapons. We have treaties to prevent the proliferation of nuclear weapons.</p>
<p>Most recently, we have collectively agreed to ban several technologies including <a href="https://www.icrc.org/ihl/INTRO/570">blinding lasers</a> and <a href="https://www.icrc.org/ihl/INTRO/580">anti-personnel mines</a>.</p>
<p>And whilst these bans have not been 100% effective, the world is undoubtedly a better place for their existence.</p>
<p>The treaties have also not prevented related technologies from being developed: go into a hospital, and a “blinding” laser may be used to fix your eyes. But if you go to the battlefields of the world today, you will not find blinding lasers being used. And no arms company today will sell you one.</p>
<p>The same is likely to be true for autonomous weapons. We won’t stop the development of the broad technology. Much the same technology will go into an autonomous car as into an autonomous drone or submarine. </p>
<p>And we’ll definitely want <a href="https://theconversation.com/au/topics/autonomous-vehicles">autonomous cars</a>. One thousand people will die on the roads of Australia this year. These numbers will plummet once we have autonomous cars. Most accidents are the result of driver error.</p>
<p>But if we get a UN ban in place, we’ll not have autonomous weapons on the battlefield. And this will be a good thing.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/118515/original/image-20160413-18132-115m7wi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The General Atomics MQ-9 Reaper is already semi-autonomous, and similar combat aircraft could soon be fully autonomous.</span>
<span class="attribution"><a class="source" href="http://www.af.mil/shared/media/photodb/photos/071127-F-2185F-105.jpg">USAF Photographic Archives</a></span>
</figcaption>
</figure>
<h2>Come on Australia</h2>
<p>Australia has led the world in many discussions around disarmament. For instance, we have taken a leading role in <a href="http://www.defence.gov.au/foi/docs/disclosures/421_1213_Documents.pdf">nuclear non-proliferation</a>.</p>
<p>But we have taken a disappointing role so far in the UN discussions around autonomous weapons. Our <a href="http://goo.gl/LNoccK">official position</a> appears welcoming.</p>
<blockquote>
<p>The development of fully autonomous systems able to conduct military targeting operations which kill and injure combatants or civilians may be closer than many of us had imagined. It is an appropriate time to consider the risks of such weapon systems and to make sure we understand fully what might constitute misuse as well as legitimate use of emerging technologies.</p>
</blockquote>
<p>However, we are not helping the discussion with official statements like the following.</p>
<blockquote>
<p>If we were to settle, ultimately, on an agreement that there were limits to the autonomy that lethal weapons may possess, or that there were limits to the weaponisation of autonomous systems, we would also have to design ways, not just of defining, but of implementing, such limits, and of verifying compliance. We should not underestimate the complexity of this task.</p>
</blockquote>
<p>This is not just unhelpful but also wrong. There is no necessity to define ways to verify compliance. Almost no weapon banned by the UN has a compliance regime.</p>
<p>There is no international body to inspect for blinding lasers. Or anti-personnel mines. Even the grand-daddy of all weapon bans, the <a href="https://en.wikipedia.org/wiki/Biological_Weapons_Convention">1975 UN convention on biological weapons</a>, has no formal compliance measures beyond self-reporting by nation states and investigation by the UN Security Council (which has never occurred).</p>
<p>There is also no necessity to define limits on autonomy. For example, the 1995 UN Protocol on Blinding Laser Weapons does not formally define a limit on the wavelength or wattage of a “blinding” laser.</p>
<p>We can simply require that autonomous or semi-autonomous weapons must have “meaningful” human control. And depend on the consensus that will undoubtedly emerge internationally as to what precisely this means.</p>
<h2>Let’s take the lead</h2>
<p>Australia is a world superpower in AI and robotics. We punch well above our weight. We have some of the most automated ports and mines in the world. And we are the current reigning <a href="https://theconversation.com/how-we-won-the-world-robot-soccer-championship-45156">world champions at robot soccer</a>. Indeed, we have been world champions five times so far.</p>
<p>And from the reaction I have had <a href="http://www.abc.net.au/radionational/programs/bigideas/killer-robots/7266930">talking about this issue</a> in public, the general population here in Australia supports the view held by both me and thousands of my colleagues that a ban would be a good idea.</p>
<p>All technology can be used for good or bad. Australia should be taking a lead in pushing the world down a good path.</p>
<p class="fine-print"><em><span>Toby Walsh co-authored the open letter calling for a ban on "killer robots" and is speaking in Geneva this week in support of a ban.</span></em></p>
<p class="fine-print"><em>We need to ban lethal autonomous weapons, or “killer robots”, as we have done with biological weapons, land mines and blinding lasers, and Australia should take a leading role in making that happen. Toby Walsh, Professor of AI at UNSW, Research Group Leader, Data61. Licensed as Creative Commons – attribution, no derivatives.</em></p>