tag:theconversation.com,2011:/global/topics/killer-robots-18966/articles
Killer robots – The Conversation
2023-12-08T05:11:49Z
tag:theconversation.com,2011:article/219302
2023-12-08T05:11:49Z
2023-12-08T05:11:49Z
Israel’s AI can produce 100 bombing targets a day in Gaza. Is this the future of war?
<p>Last week, reports emerged that the Israel Defense Forces (IDF) are using an artificial intelligence (AI) system <a href="https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/">called Habsora</a> (Hebrew for “The Gospel”) to select targets in the war on Hamas in Gaza. The system has reportedly been used to <a href="https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets">find more targets for bombing</a>, to link locations to Hamas operatives, and to estimate likely numbers of civilian deaths in advance.</p>
<p>What does it mean for AI targeting systems like this to be used in conflict? My research into the social, political and ethical implications of military use of remote and autonomous systems shows AI is already altering the character of war. </p>
<p>Militaries use remote and autonomous systems as “force multipliers” to increase the impact of their troops and protect their soldiers’ lives. AI systems can make soldiers more efficient, and are likely to enhance the speed and lethality of warfare – even as humans become less visible on the battlefield, instead gathering intelligence and targeting from afar. </p>
<p>When militaries can kill at will, with little risk to their own soldiers, will the current ethical thinking about war prevail? Or will the increasing use of AI also increase the dehumanisation of adversaries and the disconnect between wars and the societies in whose names they are fought?</p>
<h2>AI in war</h2>
<p>AI is having an impact at all levels of war, from “intelligence, surveillance and reconnaissance” support, like the IDF’s Habsora system, through to “lethal autonomous weapons systems” that can choose and attack targets <a href="https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems">without human intervention</a>.</p>
<p>These systems have the potential to reshape the character of war, making it easier to enter into a conflict. As complex and distributed systems, they may also make it more difficult to signal one’s intentions – or interpret those of an adversary – in the context of an escalating conflict.</p>
<p>Compounding this, AI can <a href="https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning">contribute to mis- or disinformation</a>, creating and amplifying dangerous misunderstandings in times of war. </p>
<p>AI systems may increase the human tendency to trust suggestions from machines (this is highlighted by the Habsora system, named after the infallible word of God), opening up uncertainty over <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2018.1481907">how far to trust</a> autonomous systems. The boundaries of an AI system that interacts with other technologies and with people may not be clear, and there may be <a href="https://www.jstor.org/stable/j.ctv11g97wm">no way to know who or what has “authored” its outputs</a>, no matter how objective and rational they may seem.</p>
<h2>High-speed machine learning</h2>
<p>Perhaps one of the most basic and important changes we are likely to see driven by AI is an increase in the speed of warfare. This may change how we understand <a href="https://www.rand.org/pubs/research_reports/RR2797.html">military deterrence</a>, which assumes humans are the primary actors and sources of intelligence and interaction in war.</p>
<p>Militaries and soldiers frame their decision-making through what is called the “<a href="https://fhs.brage.unit.no/fhs-xmlui/bitstream/handle/11250/2683228/Boyds%20OODA%20Loop%20Necesse%20vol%205%20nr%201.pdf?sequence=1&isAllowed=y">OODA loop</a>” (for observe, orient, decide, act). A faster OODA loop can help you outmanoeuvre your enemy. The goal is to avoid slowing down decisions through excessive deliberation, and instead to match the accelerating tempo of war.</p>
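<p>To make the concept concrete, the OODA cycle can be sketched as a simple software control loop. The following Python sketch is purely illustrative: every function and value in it is invented, and no real military system is remotely this simple.</p>
<pre><code>import random

def observe():
    """Gather raw data from the environment (here, a fake sensor reading)."""
    return random.random()

def orient(reading, threshold=0.8):
    """Interpret the observation against prior knowledge."""
    return reading > threshold

def decide(contact):
    """Choose an action from the interpreted situation."""
    return "engage" if contact else "hold"

def act(action):
    """Carry out the chosen action (here, just report it)."""
    print(f"executing: {action}")

# One pass through the cycle; a real loop repeats continuously, and the
# side with the faster cycle can out-tempo its opponent -- this is the
# core speed argument for putting AI inside the loop.
act(decide(orient(observe())))
</code></pre>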
<p>So the use of AI is potentially justified on the basis it can interpret and synthesise huge amounts of data, processing it and delivering outputs at rates that far surpass human cognition. </p>
<p>But where is the space for ethical deliberation in an increasingly fast and data-centric OODA loop unfolding at a safe distance from battle?</p>
<p>Israel’s targeting software is an example of this acceleration. A former head of the IDF has <a href="https://www.ynetnews.com/magazine/article/ry0uzlhu3">said</a> that human intelligence analysts might produce 50 bombing targets in Gaza each year, but the Habsora system can produce 100 targets a day, along with real-time recommendations for which ones to attack.</p>
<p>How does the system produce these targets? It does so through probabilistic reasoning offered by machine learning algorithms.</p>
<p>Machine learning algorithms learn by seeking patterns in huge piles of data, and their success is contingent on the data’s quality and quantity. They make recommendations based on probabilities. </p>
<p>The probabilities are based on pattern-matching. If a person shares enough similarities with other people labelled as enemy combatants, they may be labelled a combatant too. </p>
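<p>As a toy illustration only (the features, numbers and threshold below are all invented, and reflect nothing about how Habsora or any real system works), similarity-based labelling might look like this:</p>
<pre><code>import math

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Invented feature vectors for people already labelled "combatant".
labelled_combatants = [
    [0.9, 0.1, 0.8],
    [0.8, 0.2, 0.7],
]

def combatant_probability(person):
    """Score a new person by their closest match to labelled examples."""
    return max(similarity(person, c) for c in labelled_combatants)

# A person whose pattern of life merely *resembles* the labelled examples
# still scores highly -- correlation stands in for ground truth.
print(round(combatant_probability([0.85, 0.15, 0.75]), 3))
</code></pre>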
<h2>The problem of AI-enabled targeting at a distance</h2>
<p>Some claim machine learning enables <a href="https://philpapers.org/rec/ARKTCF">greater precision in targeting</a>, making it easier to avoid harming innocent people and to use a proportional amount of force. However, the promise of more precise targeting of airstrikes has not been fulfilled in the past, as the high toll of <a href="https://airwars.org/">declared and undeclared civilian casualties</a> from the global war on terror shows. </p>
<p>Moreover, the difference between a combatant and a civilian is <a href="https://philpapers.org/rec/WILSAU">rarely self-evident</a>. Even humans frequently cannot tell who is and is not a combatant.</p>
<p>Technology does not change this fundamental truth. Social categories and concepts are often not objective; they are contested or specific to time and place. Computer vision combined with algorithms is most effective in predictable environments, where concepts are objective, reasonably stable and internally consistent. </p>
<h2>Will AI make war worse?</h2>
<p>We live in a time of <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8497.2005.00393.x">unjust wars</a> and military occupations, egregious <a href="https://www.defence.gov.au/about/reviews-inquiries/afghanistan-inquiry">violations of the rules of engagement</a>, and an incipient <a href="https://www.nytimes.com/2023/03/25/world/asia/asia-china-military-war.html">arms race</a> in the face of US–China rivalry. In this context, the inclusion of AI in war may add new complexities that exacerbate, rather than prevent, harm. </p>
<p>AI systems make it easier for actors in war to <a href="https://doi.org/10.48550/arXiv.1802.07228">remain anonymous</a>, and can render invisible the source of violence or the decisions which lead to it. In turn, we may see increasing disconnection between militaries, soldiers, and civilians, and the wars being fought in the name of the nation they serve.</p>
<p>And as AI grows more common in war, militaries will develop countermeasures to undermine it, creating a loop of escalating militarisation. </p>
<h2>What now?</h2>
<p>Can we control AI systems to head off a future in which warfare is driven by increasing reliance on technology underpinned by learning algorithms? Controlling AI development in any area, particularly via laws and regulations, has proven difficult.</p>
<p>Many suggest we need better laws to account for systems underpinned by machine learning, but even this is not straightforward. Machine learning algorithms are <a href="https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/">difficult to regulate</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/us-military-plans-to-unleash-thousands-of-autonomous-war-robots-over-next-two-years-212444">US military plans to unleash thousands of autonomous war robots over next two years</a>
</strong>
</em>
</p>
<hr>
<p>AI-enabled weapons may program and update themselves, evading legal requirements for certainty. The engineering maxim “software is never done” implies that the law may never match the speed of technological change.</p>
<p>The quantitative act of estimating likely numbers of civilian deaths in advance, which the Habsora system performs, says little about the qualitative dimensions of targeting. In isolation, systems like Habsora cannot tell us whether a strike would be ethical or legal (that is, whether it is proportionate, discriminate and necessary, among other considerations).</p>
<p>AI should support democratic ideals, not undermine them. Trust in governments, institutions, and militaries <a href="https://www.un.org/development/desa/dspd/2021/07/trust-public-institutions/">is eroding</a> and needs to be restored if we plan to apply AI across a range of military practices. We need to deploy critical ethical and political analysis to interrogate emerging technologies and their effects, so that any form of military violence remains a last resort.</p>
<p>Until then, machine learning algorithms are best kept separate from targeting practices. Unfortunately, the world’s armies are heading in the opposite direction.</p>
<p class="fine-print"><em><span>Bianca Baggiarini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
AI systems will accelerate the pace of war.
Bianca Baggiarini, Lecturer, Australian National University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/198725
2023-02-21T13:24:17Z
2023-02-21T13:24:17Z
War in Ukraine accelerates global drive toward killer robots
<figure><img src="https://images.theconversation.com/files/510915/original/file-20230217-593-z3je8t.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4021%2C2924&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It wouldn't take much to turn this remotely operated mobile machine gun into an autonomous killer robot.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span></figcaption></figure><p>The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a <a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">Department of Defense directive</a>. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence autonomous weapons. It follows a related <a href="https://www.nato.int/cps/en/natohq/official_texts_208376.htm">implementation plan</a> released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.” </p>
<p>Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in <a href="https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines#:%7E:text=Utah%2Dbased%20Fortem%20Technologies%20has,them%20%E2%80%94%20all%20without%20human%20assistance.">Ukraine</a> and <a href="https://foreignpolicy.com/2021/03/30/army-pentagon-nagorno-karabakh-drones/">Nagorno-Karabakh</a>: Weaponized artificial intelligence is the future of warfare.</p>
<p>“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of <a href="https://article36.org/">Article 36</a>, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said. </p>
<h2>Pressure of war</h2>
<p>But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.</p>
<p>This month, a key Russian manufacturer <a href="https://www.defenseone.com/technology/2023/01/russian-robot-maker-working-bot-target-abrams-leopard-tanks/382288/">announced plans</a> to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to <a href="https://www.forbes.com/sites/katyasoldak/2023/01/27/friday-january-27-russias-war-on-ukraine-daily-news-and-information-from-ukraine/">defend Ukrainian energy facilities</a> from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous <a href="https://www.avinc.com/tms/switchblade">Switchblade drone</a>, said the technology is <a href="https://apnews.com/article/russia-ukraine-war-drone-advances-6591dc69a4bf2081dcdd265e1c986203">already within reach</a> to convert these weapons to become fully autonomous. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1446461845070549008"}"></div></p>
<p>Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “<a href="https://abcnews.go.com/Technology/wireStory/drone-advances-ukraine-bring-dawn-killer-robots-96112651">logical and inevitable next step</a>” and recently said that soldiers might see them on the battlefield in the next six months. </p>
<p>Proponents of fully autonomous weapons systems <a href="https://news.northeastern.edu/2019/11/15/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem/#_ga=2.7414138.976428111.1676666580-169995920.1676666580">argue that the technology will keep soldiers out of harm’s way</a> by keeping them off the battlefield. They also allow military decisions to be made at superhuman speed, radically improving defensive capabilities. </p>
<p>Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them. </p>
<p>By contrast, fully autonomous drones, like the so-called “<a href="https://fortemtech.com/products/dronehunter-f700/">drone hunters</a>” now <a href="https://u24.gov.ua/news/shahed_hunters_defenders">deployed in Ukraine</a>, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems. </p>
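<p>The distinction can be made concrete with a short, entirely hypothetical sketch. The function names below are invented and real weapon control software is vastly more complex, but the location of the human gate is the point:</p>
<pre><code>def semi_autonomous_engage(target, operator_confirms):
    """Semi-autonomous: the system recommends, but a human initiates."""
    if operator_confirms(target):
        return f"engaging {target}"
    return "holding fire"

def fully_autonomous_engage(target, classifier):
    """Fully autonomous: the same decision with the human gate removed."""
    if classifier(target):  # an algorithm replaces the human operator
        return f"engaging {target}"
    return "holding fire"

# The difference in code is a single substitution; the moral and legal
# difference is the entire debate.
print(semi_autonomous_engage("incoming drone", lambda t: False))  # human says no
print(fully_autonomous_engage("incoming drone", lambda t: True))  # no one to say no
</code></pre>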
<h2>Calling for a timeout</h2>
<p>Critics like <a href="https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/">The Campaign to Stop Killer Robots</a> have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of <a href="https://www.stopkillerrobots.org/stop-killer-robots/digital-dehumanisation/">digital dehumanization</a>.</p>
<p>Together with <a href="https://www.hrw.org/topic/arms/killer-robots">Human Rights Watch</a>, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a soldier crouches on the ground peering into a black box as to small projectiles with wings are launched from tubes on either side of him" src="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=662&fit=crop&dpr=1 600w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=662&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=662&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=831&fit=crop&dpr=1 754w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=831&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/510910/original/file-20230217-18-gpr6qw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=831&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings.</span>
<span class="attribution"><a class="source" href="https://madsciblog.tradoc.army.mil/wp-content/uploads/2021/06/Switchblade.jpg">U.S. Army AMRDEC Public Affairs</a></span>
</figcaption>
</figure>
<p>The organizations argue that the militaries <a href="https://research.northeastern.edu/autonomous-weapons-systems-the-utilize-artificial-intelligence-are-changing-the-nature-of-warfare-but-theres-a-problem-2/#:%7E:text=They%20found%20that%20there%20are,dollars%20into%20this%20arms%20race.">investing most heavily</a> in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the <a href="https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf">hands of terrorists and others outside of government control</a>.</p>
<p>The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “<a href="https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/">appropriate levels of human judgment over the use of force</a>.” Human Rights Watch <a href="https://www.hrw.org/news/2023/02/14/review-2023-us-policy-autonomy-weapons-systems">issued a statement</a> saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.</p>
<p>But as Gregory Allen, an expert from the national defense and international relations think tank <a href="https://www.csis.org/">Center for Strategic and International Studies</a>, argues, this language <a href="https://www.forbes.com/sites/davidhambling/2023/01/31/what-is-the-pentagons-updated-policy-on-killer-robots/">establishes a lower threshold</a> than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.” </p>
<p>The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy. </p>
<p>The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.</p>
<h2>Impossible balance?</h2>
<p>The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen. </p>
<p>The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “<a href="https://www.icrc.org/en/document/reflections-70-years-geneva-conventions-and-challenges-ahead">cannot be transferred to a machine, algorithm or weapon system</a>.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.</p>
<p>If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.</p>
<p class="fine-print"><em><span>I am not connected to Article 36 in any capacity, nor have I received any funding from them. I did write a short opinion/policy piece on AWS that was posted on their website.</span></em></p>
The technology exists to build autonomous weapons. How well they would work and whether they could be adequately controlled are unknown. The Ukraine war has only turned up the pressure.
James Dawes, Professor of English, Macalester College
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/185243
2023-01-10T04:19:00Z
2023-01-10T04:19:00Z
What killer robots mean for the future of war
<figure><img src="https://images.theconversation.com/files/501876/original/file-20221219-26-phauot.jpg?ixlib=rb-1.1.0&rect=45%2C27%2C5961%2C3980&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Killer robots don't look like this, for now</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/automatic-robot-hand-killing-people-649198981">Denis Starostin/Shutterstock</a></span></figcaption></figure><p>You might have heard of killer robots, slaughterbots or terminators – officially called lethal autonomous weapons (LAWs) – from films and books. And the idea of super-intelligent weapons running rampant is still science fiction. But as AI weapons become increasingly sophisticated, <a href="https://www.hrw.org/news/2021/02/02/killer-robots-survey-shows-opposition-remains-strong">public concern is growing</a> over fears about lack of accountability and the risk of technical failure.</p>
<p>Already we have seen how supposedly neutral AI systems have produced <a href="https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity">sexist algorithms</a> and <a href="https://www.theverge.com/2017/4/4/15177512/google-youtube-content-ai-fooled-tricked">inept content moderation systems</a>, largely because their creators did not understand the technology. But in war, these kinds of misunderstandings could kill civilians or wreck negotiations. </p>
<p>For example, a target recognition algorithm could be trained to identify tanks from satellite imagery. But what if all of the images used to train the system featured soldiers in formation around the tank? The system could learn to associate the presence of soldiers with valid targets, and might then mistake a civilian vehicle passing through a military blockade, surrounded by soldiers, for a target. </p>
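<p>A toy sketch shows how such a spurious correlation gets learned. The data below are invented, and real computer-vision systems fail in subtler ways, but the mechanism is the same:</p>
<pre><code># Training examples as (soldiers_present, is_tank). In this invented
# dataset every tank image also happens to contain soldiers in formation.
training = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0)]

# A naive learner picks whichever single feature perfectly predicts the
# label on the training set. Here "soldiers_present" does -- so the
# learned rule is a proxy, not tank recognition.
def learned_rule(soldiers_present):
    return soldiers_present == 1

# Check: the proxy rule is 100% accurate on the (biased) training data...
print(all(learned_rule(s) == bool(t) for s, t in training))  # True

# ...yet a civilian vehicle at a military blockade (soldiers present,
# no tank) is misidentified as a target.
print(learned_rule(1))  # True -- false positive on a civilian vehicle
</code></pre>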
<h2>Why do we need autonomous weapons?</h2>
<p>Civilians in many countries (such as <a href="https://www.history.com/topics/vietnam-war/agent-orange-1">Vietnam</a>, <a href="https://www.indiatoday.in/magazine/international/story/19830731-us-accuses-soviet-union-of-using-chemical-and-biological-weapons-in-afghanistan-770879-2013-07-19">Afghanistan</a> and <a href="http://www.stopclustermunitions.org/en-gb/cluster-bombs/use-of-cluster-bombs/in-yemen.aspx">Yemen</a>) have suffered because of the way global superpowers build and use increasingly advanced weapons. Many people would argue these weapons have done more harm than good, most recently pointing to the <a href="https://fortune.com/2022/03/01/russia-ukraine-invasion-war-a-i-artificial-intelligence/">Russian invasion of Ukraine</a> early in 2022. </p>
<p>In the other camp are people who say a country must be able to defend itself, which means keeping up with other nations’ military technology. AI can already outsmart humans at <a href="https://www.analyticsinsight.net/all-the-times-ai-has-beaten-humans/">chess and poker</a>. It outperforms humans in the real world too. For example, Microsoft claims its speech recognition software has an error rate of 1%, compared with a human error rate of around 6%. So it is hardly surprising that armies are slowly handing algorithms the reins. </p>
<p>But how do we avoid adding killer robots to the long list of things we wish we had never invented? First of all: know thy enemy. </p>
<h2>What are Lethal Autonomous Weapons (LAWs)?</h2>
<p>The US <a href="https://irp.fas.org/doddir/dod/d3000_09.pdf">Department of Defense</a> defines an autonomous weapon system as: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator.”</p>
<p>Many combat systems already fit this definition. The computers on drones and modern missiles have algorithms that can <a href="https://cord.cranfield.ac.uk/articles/poster/Deep_Learning_Techniques_for_Missile_Seeker_Automatic_Target_Recognition/11619462">detect targets</a> and fire at them with far more precision than a human operator. Israel’s Iron Dome is one of several active defence systems that can engage targets without human supervision. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_eSZaCHXBVA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>While designed for missile defence, the Iron Dome <a href="https://www.timesofisrael.com/iron-dome-almost-knocked-out-israeli-f-15-during-recent-gaza-fighting/">could kill people by accident</a>. But the risk is seen as acceptable in international politics because the Iron Dome has a generally reliable record of protecting civilian lives. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501875/original/file-20221219-20-rmhn1m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An Israeli missile defence system</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/beer-sheva-march-27-2007-israeli-115739908">ChameleonsEye/Shutterstock</a></span>
</figcaption>
</figure>
<p>There are AI-enabled weapons designed to attack people too, from <a href="https://www.lawfareblog.com/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war">robot sentries</a> to <a href="https://www.forbes.com/sites/davidhambling/2022/11/04/russian-videos-reveal-new-details-of-loitering-munitions/?sh=60b3bdc25dbc">loitering kamikaze drones</a> used in the Ukraine war. LAWs are already here. So, if we want to influence the use of LAWs, we need to understand the history of modern weapons. </p>
<h2>The rules of war</h2>
<p>International agreements, such as the <a href="https://www.icrc.org/en/war-and-law/treaties-customary-law/geneva-conventions">Geneva conventions</a>, establish standards of conduct for the treatment of prisoners of war and civilians during conflict. They are among the few tools we have to control how wars are fought. Unfortunately, the use of chemical weapons by the US in Vietnam, and by Russia in Afghanistan, is proof these measures aren’t always successful. </p>
<p>Worse is when key players refuse to sign up. The <a href="http://www.icbl.org/en-gb/home.aspx">International Campaign to Ban Landmines (ICBL)</a> has been lobbying politicians since 1992 to ban mines and cluster munitions (which randomly scatter small bombs over a wide area). In 1997 the <a href="https://www.un.org/disarmament/anti-personnel-landmines-convention/">Ottawa treaty</a> introduced a ban on anti-personnel mines, which 122 countries signed. But the US, China and Russia didn’t buy in. </p>
<p>Landmines have injured and killed at least 5,000 soldiers and civilians every year since 2015, and as many as 9,440 people in 2017. The <a href="http://www.the-monitor.org/en-gb/reports/2022/landmine-monitor-2022/major-findings.aspx">Landmine and Cluster Munition Monitor 2022 report said</a>: </p>
<blockquote>
<p>Casualties…have been disturbingly high for the past seven years, following more than a decade of historic reductions. The year 2021 was no exception. This trend is largely the result of increased conflict and contamination by improvised mines observed since 2015. Civilians represented most of the victims recorded, half of whom were children.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/deaths-from-landmines-are-on-the-rise-and-clearing-them-all-will-take-decades-171848">Deaths from landmines are on the rise – and clearing them all will take decades</a>
</strong>
</em>
</p>
<hr>
<p>Despite the best efforts of the ICBL, there is evidence both <a href="https://www.hrw.org/news/2022/06/15/background-briefing-landmine-use-ukraine">Russia</a> and <a href="https://www.usnews.com/news/world/articles/2022-04-07/ukraine-is-effectively-using-landmines-in-war-with-russia-u-s-general">Ukraine</a> (a member of the Ottawa treaty) are using landmines during the Russian invasion of Ukraine. Ukraine has also relied on drones to guide artillery strikes, or more recently for <a href="https://inews.co.uk/news/kamikaze-drones-valuable-weapon-both-sides-russia-ukraine-war-1706286">“kamikaze attacks” on Russian infrastructure</a>.</p>
<h2>Our future</h2>
<p>But what about more advanced AI enabled weapons? The <a href="https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/">Campaign to Stop Killer Robots</a> lists nine key problems with LAWs, focusing on the lack of accountability, and the inherent dehumanisation of killing that comes with it. </p>
<p>While this criticism is valid, a full ban of LAWs is unrealistic for two reasons. First, much like mines, Pandora’s box has already been opened. Second, the lines between autonomous weapons, LAWs and killer robots are so blurred it’s difficult to distinguish between them. Military leaders would always be able to find a loophole in the wording of a ban and sneak killer robots into service as defensive autonomous weapons. They might even do so unknowingly. </p>
<figure class="align-center ">
<img alt="Man in baseball cap holds up a placard" src="https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C8342%2C5574&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501871/original/file-20221219-18-lk9fdi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">San Francisco, CA – December 5, 2022: Activists opposed to the implementation of armed police robots rallied at City Hall.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/san-francisco-ca-december-5-2022-2234565109">Phil Pasquini</a></span>
</figcaption>
</figure>
<p>We will almost certainly see more AI-enabled weapons in the future. But this doesn’t mean we have to look the other way. More specific and nuanced prohibitions would help keep our politicians, data scientists and engineers accountable. </p>
<p>For example, by banning:</p>
<ul>
<li>black box AI: systems where the user has no information about the algorithm beyond inputs and outputs (see the sketch after this list)</li>
<li>unreliable AI: systems that have been poorly tested (such as in the military blockade example mentioned previously).</li>
</ul>
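<p>A minimal sketch of the “black box” distinction (the class names and rule below are invented stand-ins, not any real system) shows how little a black box offers the user compared with an auditable one:</p>
<pre><code>class BlackBoxModel:
    """The user sees inputs and outputs -- nothing else."""
    def predict(self, features):
        return self._opaque(features)   # internals hidden from the user

    def _opaque(self, features):
        return sum(features) > 1.0      # invented stand-in logic

class AuditableModel(BlackBoxModel):
    """The same decision, but the reasoning can be inspected and contested."""
    def explain(self, features):
        return {"inputs": features, "rule": "sum(inputs) > 1.0"}

model = AuditableModel()
print(model.predict([0.7, 0.6]))   # True -- all a black box ever offers
print(model.explain([0.7, 0.6]))   # the extra part a black-box ban demands
</code></pre>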
<p>And you don’t have to be an expert in AI to have a view on LAWs. Stay aware of new military AI developments. When you read or hear about AI being used in combat, ask yourself: is it justified? Is it preserving civilian life? If not, engage with the communities that are working to control these systems. Together, we stand a chance at preventing AI from doing more harm than good.</p>
<p class="fine-print"><em><span>Jonathan Erskine receives funding from UKRI and Thales Training and Simulation Ltd. </span></em></p><p class="fine-print"><em><span>Miranda Mowbray is affiliated with the University of Bristol, where she gives lectures on AI ethics in the UKRI-funded Centre for Doctoral Training in Interactive AI. She is a member of the advisory council for the Open Rights Group.</span></em></p>
AI-enabled weapons (LAWs) are still in their adolescence, which means we have a chance to influence their development. But we need to act now.
Jonathan Erskine, PhD Student, Interactive AI, University of Bristol
Miranda Mowbray, Lecturer in Interactive AI, University of Bristol
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/192170
2022-10-16T19:02:23Z
2022-10-16T19:02:23Z
‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie
<figure><img src="https://images.theconversation.com/files/489521/original/file-20221013-12-lm966h.jpg?ixlib=rb-1.1.0&rect=143%2C201%2C1386%2C862&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ghost Robotics Vision 60 Q-UGV.</span> <span class="attribution"><a class="source" href="https://www.dvidshub.net/image/7351259/ghost-robotics-vision-60-q-ugv-demo">US Space Force photo by Senior Airman Samuel Becker</a></span></figcaption></figure><p>You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies <a href="https://www.popularmechanics.com/military/a12043/4267549/">would watch the latest Bond movie</a> to see what technologies might be coming their way.</p>
<p>Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming <a href="https://www.thewrap.com/florence-pugh-dolly-movie-murderous-sex-robot-apple-tv-plus/">sex robot courtroom drama Dolly</a>.</p>
<p>I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a <a href="https://apex-magazine.com/short-fiction/dolly/">2011 short story</a> by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.</p>
<h2>The real killer robots</h2>
<p>Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic <a href="https://www.britannica.com/topic/Metropolis-film-1927">Metropolis</a>.</p>
<p>But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.</p>
<p>Indeed, contrary to recent fears, robots may never be sentient.</p>
<p>It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and <a href="https://www.militarystrategymagazine.com/article/drones-in-the-nagorno-karabakh-war-analyzing-the-data/">Nagorno-Karabakh</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/drones-over-ukraine-fears-of-russian-killer-robots-have-failed-to-materialise-180244">Drones over Ukraine: fears of Russian 'killer robots' have failed to materialise</a>
</strong>
</em>
</p>
<hr>
<h2>A war transformed</h2>
<p>Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of <a href="https://theconversation.com/eye-in-the-sky-movie-gives-a-real-insight-into-the-future-of-warfare-56684">the real future of killer robots</a>. </p>
<p>On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store. </p>
<p>And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms. </p>
<p>This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see <a href="https://www.forbes.com/sites/amirhusain/2022/06/30/turkey-builds-a-hyperwar-capable-military/?sh=1500c4b855e1">Turkey emerging as a major drone power</a>.</p>
<p>And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies. </p>
<p>Robot manufacturers are, however, starting to push back against this future.</p>
<h2>A pledge not to weaponise</h2>
<p>Last week, six leading robotics companies pledged they would <a href="https://www.theguardian.com/technology/2022/oct/07/killer-robots-companies-pledge-no-weapons">never weaponise their robot platforms</a>. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can <a href="https://youtu.be/knoOXBLFQ-s">perform an impressive backflip</a>, and the Spot robot dog, which looks like it’s <a href="https://youtu.be/wlkCQXHEgjA">straight out of the Black Mirror TV series</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1578400002056953858"}"></div></p>
<p>This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised <a href="https://newsroom.unsw.edu.au/news/science-tech/world%E2%80%99s-tech-leaders-urge-un-ban-killer-robots">an open letter</a> signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a <a href="https://newsroom.unsw.edu.au/news/science-tech/unsws-toby-walsh-voted-runner-global-award">global disarmament award</a>.</p>
<p>However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.</p>
<p>We have, for example, already seen <a href="https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back">third parties mount guns</a> on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was <a href="https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html">assassinated by Israeli agents</a> using a robot machine gun in 2020.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<h2>Collective action to safeguard our future</h2>
<p>The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.</p>
<p>Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation. </p>
<p>More significant than any pledge from robotics companies, therefore, is that the UN Human Rights Council <a href="https://www.ohchr.org/en/news/2022/10/human-rights-council-adopts-six-resolutions-appoints-special-rapporteur-situation">has recently unanimously decided</a> to explore the human rights implications of new and emerging technologies like autonomous weapons. </p>
<p>Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation. </p>
<p>Australia has not, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/192170/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.
Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/188983
2022-08-22T05:24:42Z
2022-08-22T05:24:42Z
Virtual reality, autonomous weapons and the future of war: military tech startup Anduril comes to Australia
<figure><img src="https://images.theconversation.com/files/480252/original/file-20220822-65285-8lcycw.png?ixlib=rb-1.1.0&rect=17%2C6%2C1479%2C992&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Anduril</span></span></figcaption></figure><p>Earlier this month, posters started going up around Sydney advertising an event called “In the Ops Room, with Palmer Luckey”. Rather than an album launch or standup gig, this turned out to be a free talk given last week by the chief executive of a high-tech US defence company called Anduril.</p>
<p>The company has set up an Australian arm, and Luckey is in town to <a href="https://prwire.com.au/pr/104375/from-oculus-rift-to-military-tech-palmer-luckey-is-in-australia">entice</a> “brilliant technologists in military engineering” to sign on. </p>
<p>Anduril makes a software system called <a href="https://www.anduril.com/lattice/">Lattice</a>, an “autonomous sensemaking and command & control platform” with a strong surveillance focus, which is used on <a href="https://www.washingtonpost.com/national-security/2022/03/11/mexico-border-surveillance-towers/">the US–Mexico border</a>. The company also produces <a href="https://www.cnet.com/science/palmer-luckey-ghost-4-military-drones-can-swarm-into-an-ai-surveillance-system/">flying drones</a> and has a deal to produce <a href="https://www.theguardian.com/australia-news/2022/aug/17/robotic-submarines-fast-tracked-for-sydney-harbour-to-bridge-defence-capability-gap">three robotic submarines</a> for Australia, with <a href="https://www.anduril.com/hardware/dive-ld/">capabilities</a> for surveillance, reconnaissance, and warfare. </p>
<p>The PR splash is unusual from the normally secretive world of military technology. But Luckey’s talk opened a window onto the future as seen <a href="https://www.anduril.com/mission/">by a company</a> “transforming US & allied military capabilities with advanced technology”.</p>
<h2>From Oculus to Anduril</h2>
<figure class="align-right ">
<img alt="a poster advertising the Luckey talk, pasted to an electricity box on a street in inner Sydney" src="https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=731&fit=crop&dpr=1 600w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=731&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=731&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=919&fit=crop&dpr=1 754w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=919&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/480215/original/file-20220821-38135-rms2jq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=919&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">One of the posters advertising the Anduril talk in Sydney.</span>
<span class="attribution"><span class="source">Photo by Julia Scott-Stevenson</span></span>
</figcaption>
</figure>
<p>Unlike most defence tech moguls, Luckey got his start in the world of immersive tech and gaming. </p>
<p>While at college, the Anduril founder had <a href="https://www.latimes.com/entertainment/herocomplex/la-et-hc-palmer-luckey-s-oculus-rift-could-be-a-virtual-reality-breakthrough-20160326-story.html">a brief stint</a> at a military-affiliated mixed reality research lab at the University of Southern California, then set up his own virtual reality headset company called Oculus VR. In 2014, at the age of 21, Luckey sold Oculus to Facebook for US$2 billion.</p>
<p>In 2017 Luckey was fired by Facebook for reasons that were never made public. According to <a href="https://www.cnet.com/tech/tech-industry/facebook-reportedly-fired-palmer-luckey-for-political-views/">some reports</a>, the issue was Luckey’s support for the presidential campaign of Donald Trump. </p>
<p>Luckey’s next move, with backing from right-wing venture capitalist Peter Thiel’s Founder’s Fund, was to <a href="https://www.forbes.com/sites/jeremybogaisky/2022/06/03/palmer-luckey-anduril/">set up Anduril</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1006019397280862210"}"></div></p>
<h2>Finding new markets</h2>
<p>Since Luckey’s departure, Facebook (now known as Meta) has broadened its efforts beyond the virtual and augmented reality market. A forthcoming <a href="https://www.techradar.com/news/project-cambria-is-metas-most-important-vr-headset-right-now">“mixed reality” headset</a> plays a key role in its plans for a metaverse being pitched to business and industry as well as consumers.</p>
<p>We can see similar pivots from consumers to enterprise across the immersive tech industry. Magic Leap, makers of a much hyped mixed-reality headset, later imploded and re-emerged <a href="https://www.theverge.com/2020/6/16/21274638/magic-leap-app-store-partnerships-update">focusing on healthcare</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/potential-for-harm-microsoft-to-make-us-22-billion-worth-of-augmented-reality-headsets-for-us-army-158308">'Potential for harm': Microsoft to make US$22 billion worth of augmented reality headsets for US Army</a>
</strong>
</em>
</p>
<hr>
<p>Microsoft’s mixed-reality headset, the HoloLens, was initially seen at <a href="https://docubase.mit.edu/project/terminal-3/">international film festivals</a>. However, the HoloLens 2, released in 2019, was marketed solely to businesses. </p>
<p>Then, in 2021, Microsoft won a ten-year, US$22 billion contract to provide the US Army with <a href="https://theconversation.com/potential-for-harm-microsoft-to-make-us-22-billion-worth-of-augmented-reality-headsets-for-us-army-158308">120,000 head-mounted displays</a>. Known as “Integrated Visual Augmentation Systems”, these headsets include a range of technologies such as thermal sensors, a heads-up display and machine learning for training situations.</p>
<h2>Fulfilling work?</h2>
<p>Speaking to the Sydney audience on Thursday, Luckey framed his own shift to defence not as one of economic necessity, but of personal fulfilment. He described saying “your job is worthless” to new recruits in social media companies making games or augmented reality filters. </p>
<p>That kind of work is fun but ultimately meaningless, he says, whereas working for Anduril would be “professionally fulfilling, spiritually fulfilling, fiscally fulfilling”. </p>
<p>Not all technology workers would agree that defence contracts are spiritually fulfilling. In 2018, Google employees revolted <a href="https://www.fastcompany.com/40578996/the-threat-of-weaponized-a-i-is-tearing-google-apart">against Project Maven</a>, an AI effort for the Pentagon. Staff at <a href="https://www.theverge.com/2019/2/22/18236116/microsoft-HoloLens-army-contract-workers-letter">Microsoft</a> and <a href="https://www.vice.com/en/article/xgx3ww/unity-ceo-promises-employees-their-work-will-never-lead-to-loss-of-life">Unity</a> have also expressed consternation over military involvement. </p>
<h2>‘Billions of robots’</h2>
<p>The first audience question on Thursday asked Luckey about the risks of autonomous AI – weapons run by software that can make its own decisions. </p>
<p>Luckey said he was worried about the potential of autonomy to do “really spooky things”, but much more concerned about “very evil people using very basic AI”. He suggested there was no moral high ground in refusing to work on autonomous weapons, as the alternative was “less principled people” working on them. </p>
<p>Luckey did say Anduril will always have a “human in the loop”: “[The software] is not making any life or death decisions without a person who’s directly responsible for that happening.” </p>
<p>This may be current policy, but it seems at odds with Luckey’s vision of the future of war. Earlier in the evening, he painted a picture:</p>
<blockquote>
<p>You’re going to see much larger numbers of systems [in conflicts] … you can’t have, let’s say, billions of robots that are all acting together, if they all have to be individually piloted directly by a person, it’s just not going to work, so autonomy is going to be critical for that.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>Not everyone is as sanguine about the autonomous weapons arms race as Luckey. Thousands of scientists have <a href="https://www.cnet.com/science/elon-musk-google-deepmind-pledge-no-deadly-ai-autonomous-weapons/">pledged</a> not to develop lethal autonomous weapons.</p>
<p>Australian AI expert Toby Walsh, among others, has <a href="https://www.nytimes.com/2019/07/30/science/autonomous-weapons-artificial-intelligence.html">made the case</a> that “the best time to ban such weapons is before they’re available”.</p>
<h2>Choose your future</h2>
<p>My <a href="https://immerse.news/virtual-futures-a-manifesto-for-immersive-experiences-ffb9d3980f0f">own research</a> has explored the potential of immersive media technologies to help us imagine pathways to a future we want to live in. </p>
<p>Luckey seems to argue he wants the same: a use for these incredible technologies beyond augmented reality cat filters and “worthless” games. Unfortunately his vision of that future is in the zero-sum framing of an arms race, with surveillance and AI weapons at the core (and perhaps even “billions of robots acting together”). </p>
<p>During Luckey’s talk, he mentioned that Anduril Australia is working on other projects beyond the robotic subs, but he couldn’t share what these were. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australias-pursuit-of-killer-robots-could-put-the-trans-tasman-alliance-with-new-zealand-on-shaky-ground-188520">Australia's pursuit of 'killer robots' could put the trans-Tasman alliance with New Zealand on shaky ground</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/188983/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Julia Scott-Stevenson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Anduril says it is “transforming US & allied military capabilities with advanced technology” – and it’s setting up shop in Australia.
Julia Scott-Stevenson, Chancellor's Postdoctoral Research Fellow, University of Technology Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/188520
2022-08-21T20:03:06Z
2022-08-21T20:03:06Z
Australia’s pursuit of ‘killer robots’ could put the trans-Tasman alliance with New Zealand on shaky ground
<figure><img src="https://images.theconversation.com/files/479984/original/file-20220818-546-nyccc.jpg?ixlib=rb-1.1.0&rect=0%2C242%2C8986%2C4944&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>Australia’s recently <a href="https://www.defence.gov.au/about/reviews-inquiries/defence-strategic-review">announced</a> defence review, intended to be the most thorough in almost four decades, will give us a good idea of how Australia sees its role in an increasingly tense strategic environment.</p>
<p>As New Zealand’s only formal military ally, Australia’s defence choices will have significant implications, both for New Zealand and regional geopolitics.</p>
<p>There are several areas of contention in the trans-Tasman relationship. One is Australia’s pursuit of nuclear-powered submarines, which clashes with New Zealand’s anti-nuclear stance. Another lies in the two countries’ diverging approaches to autonomous weapons systems (AWS), colloquially known as “killer robots”. </p>
<figure class="align-center ">
<img alt="Boeing Australia's autonomous 'loyal wingman' aircraft" src="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Boeing Australia is developing autonomous ‘loyal wingman’ aircraft to complement manned aircraft.</span>
<span class="attribution"><a class="source" href="https://www.flightglobal.com/defence/boeing-australia-pushes-loyal-wingman-maiden-flight-to-2021/141691.article">Boeing</a>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In general, AWS are <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">considered</a> to be “weapons systems that, once activated, can select and engage targets without further human intervention”. There is, however, no internationally agreed definition.</p>
<p>New Zealand is involved with international attempts to ban and regulate AWS. It seeks a ban on systems that “are not sufficiently predictable or controllable to meet legal or ethical requirements” and advocates for “rules or limits to govern the development and use of AWS”. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1424978867614228485"}"></div></p>
<p>If this seems vague to you, it should. This ambiguity in definition makes it difficult to determine which systems New Zealand seeks to ban or regulate.</p>
<h2>Australia’s prioritisation of AWS</h2>
<p>Australia, meanwhile, has been developing what it more commonly refers to as robotics and autonomous systems (RAS) with <a href="https://www.tandfonline.com/doi/full/10.1080/10357718.2022.2095615">gusto</a>. Since 2016, Australia has identified RAS as a priority area of development and substantially increased <a href="https://www.dst.defence.gov.au/nextgentechfund">funding</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<p>The Australian <a href="https://www.navy.gov.au/sites/default/files/documents/RAN_WIN_RASAI_Strategy_2040f2_hi.pdf">navy</a>, <a href="https://researchcentre.army.gov.au/sites/default/files/2020-03/robototic_autonomous_systems_strategy.pdf">army</a> and defence force (<a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">ADF</a>) have each released concept documents since 2018, discussing RAS and their associated benefits, risks, challenges and opportunities.</p>
<p>Key systems Australia is pursuing include the autonomous aircraft <a href="https://news.defence.gov.au/service/introducing-ghost-bat">Ghost Bat</a>, three different kinds of <a href="https://www.australiandefence.com.au/defence/sea/navy-s-uncrewed-undersea-plans">extra-large underwater autonomous vehicles</a> and <a href="https://www.minister.defence.gov.au/minister/melissa-price/media-releases/autonomous-truck-project-passes-major-milestone">autonomous trucks</a>.</p>
<h2>Why is Australia seeking to develop these technologies?</h2>
<p>The short answer is threefold: military advantage, saving lives and economics.</p>
<p>Australia and its allies and partners, particularly the US, are <a href="https://www.ussc.edu.au/analysis/us-china-technology-competition-and-what-it-means-for-australia">fearful</a> of losing the technological superiority they have long held over rivals such as China. </p>
<p>Large military capabilities, like nuclear-powered submarines, take both time and money to acquire. Australia is further limited in what it can do by the size of its defence force. RAS are seen as a way to potentially maintain advantage, and to do more with less.</p>
<p>RAS are also seen as a way to save lives. A <a href="https://media.defense.gov/2020/Nov/23/2002540369/-1/-1/1/WYATT.PDF">survey</a> of Australian military personnel found they considered reduction of harm and injury to defence personnel, allied personnel and civilians among the most important potential benefits of RAS. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>The Australian Defence Force also <a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">believes</a> RAS will be cheaper than large platforms. Inflation means money already committed to defence has less purchasing power. RAS present an opportunity to achieve the same outcomes at a lower cost.</p>
<p>Meanwhile, in 2018, the Australian government outlined its intention to become a top-ten <a href="https://www.ft.com/content/d743d758-04b2-11e8-9650-9c0ad2d7c5b5">defence exporter</a>. There are keen <a href="https://breakingdefense.com/2022/03/aussies-aim-for-1b-in-exports-of-loyal-wingman-now-ghost-bat/">hopes</a> the Ghost Bat will become a successful defence export. </p>
<p>At the same time, the government is keen to <a href="https://apo.org.au/sites/default/files/resource-files/2016-02/apo-nid93621.pdf">build</a> closer ties between defence, industry and academia. Industry and academia both vie for defence funding, and this drives development of RAS.</p>
<p>Of course, the technology is new. It’s not guaranteed RAS will save lives, save money or achieve military advantage. The extent to which RAS will be used, and what they will be used for, is not foreseeable. It is in this uncertainty that New Zealand must make judgments about AWS and alliance management.</p>
<figure class="align-center ">
<img alt="Armed Autonomous aerial vehicle on runway" src="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Autonomous systems are seen as a way to save lives.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<h2>What this means for the trans-Tasman relationship</h2>
<p>The nuclear-powered submarines captured attention when Australia’s new AUKUS partnership with the US and UK was announced, but the pact’s primary purpose is much broader: sharing defence technology, including RAS. </p>
<p>The most recent statement from the AUKUS working groups <a href="https://www.gov.uk/government/news/readout-of-aukus-joint-steering-group-meetings--2">says</a> they “will seek opportunities to engage allies and close partners”. Last week, US Deputy Secretary of State Wendy Sherman made it clear New Zealand was one such <a href="https://www.rnz.co.nz/news/political/472583/us-would-have-conversations-with-new-zealand-if-time-comes-for-others-to-join-aukus-top-diplomat">partner</a>.</p>
<p>Australia’s focus on RAS, particularly in the context of AUKUS, may soon bring alliance questions to the fore. Strategic studies expert Robert Ayson has argued AUKUS, combined with increased strategic tension, <a href="https://pacforum.org/publication/pacnet-48-new-zealand-and-aukus-affected-without-being-included">means</a> that “year by year New Zealand’s alliance commitment to the defence of Australia will carry bigger implications”. AWS will play a role in these implications.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/nukes-allies-weapons-and-cost-4-big-questions-nzs-defence-review-must-address-188732">Nukes, allies, weapons and cost: 4 big questions NZ's defence review must address</a>
</strong>
</em>
</p>
<hr>
<p>AWS may seem an insignificant trans-Tasman difference compared to the use of nuclear technologies. But AWS come with a lot more uncertainty and fuzziness than, say, <a href="https://www.smh.com.au/world/oceania/not-in-our-waters-ardern-says-no-to-visits-from-australia-s-new-nuclear-subs-20210916-p58s7k.html">banning</a> nuclear-powered submarines in New Zealand waters. This fuzziness creates ample room for misperceptions and poor communication.</p>
<p>Trust in alliance relationships is easily damaged and difficult to repair. Clear communication and a good understanding of each other’s positions are essential, and the ambiguity of AWS makes both difficult. </p>
<p>New Zealand and Australia may need to clarify their respective positions before Australia’s defence review is released next March. Otherwise, they run the risk of fuelling misunderstandings at a delicate moment for trans-Tasman relations.</p>
<p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>
Diverging views on automated weapons systems could make it difficult for Australia and New Zealand to manage military ties at a delicate time in trans-Tasman relations.
Sian Troath, Postdoctoral fellow, University of Canterbury
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/188316
2022-08-05T16:44:40Z
2022-08-05T16:44:40Z
Bladed ‘Ninja’ missile used to kill al-Qaida leader is part of a scary new generation of unregulated weapons
<p>The recent killing of al-Qaida leader <a href="https://theconversation.com/afghanistan-assassination-of-al-qaida-chief-reveals-tensions-at-the-top-of-the-taliban-188133">Ayman al-Zawahiri </a> by CIA drone strike was the latest <a href="https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/08/01/remarks-by-president-biden-on-a-successful-counterterrorism-operation-in-afghanistan/">US response to 9/11</a>. Politically, it amplified existing distrust between US leaders and the Taliban government in Afghanistan. The killing also exposed compromises in the <a href="https://www.bbc.co.uk/news/world-asia-51689443">2020 Doha peace agreement</a> between the US and the Taliban.</p>
<p>But another story is emerging with wider implications: the speed and nature of international weapons development. Take the weapon reportedly used to kill al-Zawahiri: the <a href="https://www.lemonde.fr/en/international/article/2022/08/03/ayman-al-zawahiri-s-death-what-is-the-hellfire-r9x-missile-that-the-americans-purportedly-used_5992310_4.html">Hellfire R9X “Ninja” missile</a>.</p>
<p>The Hellfire missile was originally conceived in the 1970s and 80s to destroy Soviet tanks. Rapid improvements from the 1990s onwards have <a href="https://www.thedefensepost.com/2021/03/22/agm-114-hellfire-missile/">resulted in multiple variations</a> with different capabilities. They can be launched from helicopters or Reaper drones. Their <a href="https://asc.army.mil/web/portfolio-item/hellfire-family-of-missiles/">different explosive payloads</a> can be set off in different ways: on impact or before impact.</p>
<p>Then there is the Hellfire R9X “Ninja”. It is not new, though it has remained largely in the shadows for five years. It was reportedly used in 2017 in Syria to <a href="https://www.wsj.com/articles/secret-u-s-missile-aims-to-kill-only-terrorists-not-nearby-civilians-11557403411">kill the deputy al-Qaida leader</a>, Abu Khayr al-Masri.</p>
<p>The Ninja missile does not rely on an explosive warhead to destroy or kill its target. It uses the speed, accuracy and kinetic energy of a 100-pound missile fired from up to 20,000 feet, armed with <a href="https://www.thedefensepost.com/2021/03/22/agm-114-hellfire-missile/">six blades</a> which deploy in the last moments before impact.</p>
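<p>A back-of-the-envelope calculation gives a sense of the energy involved. The numbers below are rough assumptions for illustration, not published specifications: roughly 45kg for a “100-pound” missile, and an assumed impact speed of about 450 metres per second.</p>
<pre><code># Back-of-the-envelope kinetic energy of a Hellfire-class missile.
# Mass and impact speed are illustrative assumptions, not published figures.

mass_kg = 45.0     # a "100-pound" missile
speed_m_s = 450.0  # assumed impact speed

kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
tnt_equivalent_kg = kinetic_energy_j / 4.184e6  # 1kg of TNT is ~4.184 MJ

print(f"Kinetic energy: {kinetic_energy_j / 1e6:.1f} MJ")  # ~4.6 MJ
print(f"TNT equivalent: ~{tnt_equivalent_kg:.1f} kg")      # ~1.1 kg
</code></pre>
<p>On these assumptions, the impact alone delivers energy on the order of a kilogram of TNT, concentrated along the path of the blades rather than dispersed in a blast.</p>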
<h2>‘Super weapons’</h2>
<p>The Ninja missile is the ultimate attempt – thus far – to accurately target and kill a single person. No explosion, no widespread destruction, and no deaths of bystanders. </p>
<p>But other weapon developments will also affect the way we live and how wars are fought or deterred. Russia has <a href="https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/03-putins-super-weapons">invested heavily</a> in these so-called super-weapons, building on older technologies. They aim to reduce or eliminate technological advantages enjoyed by the United States or Nato. </p>
<p>Russia’s hypersonic missile development aims are highly ambitious. The <a href="https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/03-putins-super-weapons">Avangard</a> missile, for example, won’t need to fly outside the earth’s atmosphere. It will remain within the upper atmosphere instead, giving it the ability to manoeuvre. </p>
<p>Such manoeuvrability will make it harder to detect or intercept. China’s <a href="https://eurasiantimes.com/china-flashes-rare-footage-of-hypersonic-missile-army-day/">DF-17 hypersonic ballistic missile</a> is similarly intended to evade US missile defences.</p>
<h2>The autonomous era</h2>
<p>At a smaller scale, <a href="https://metro.co.uk/2021/10/14/lethal-robot-dogs-now-have-assault-rifles-attached-to-their-backs-15420004/">robot dogs with mounted machine guns</a> are emerging on the weapons market. The weapon development company <a href="https://sword-int.com/the-sword-story/">Sword International</a> took a Ghost Robotics quadrupedal unmanned ground vehicle – or dog robot – and mounted an assault rifle on it. It was one of three robot dogs on <a href="https://www.independent.co.uk/tv/editors-picks/robot-dog-rifle-black-mirror-vf8b11fde">display at a US army trade show</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1449342876408713220"}"></div></p>
<p>Turkey, meanwhile, is claiming it has developed <a href="https://www.trtworld.com/magazine/a-series-of-autonomous-drones-gives-turkey-a-military-edge-47201">four types of autonomous drones</a>, which can identify and kill people, all without input from a human operator, or GPS guidance. According to a <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf?OpenElement">UN report</a> from March 2021, such an autonomous weapon system has been used already in Libya against a logistics convoy affiliated with the Khalifa Haftar armed group.</p>
<p>Autonomous weapons that don’t need GPS guidance are particularly significant. In a future war between major powers, the satellites which provide GPS navigation can expect to be shot down. So any military system or aircraft which relies on GPS signals for navigation or targeting would be rendered ineffective. </p>
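<p>To see why GPS-free navigation is technically significant, consider the usual fallback, inertial guidance: position is obtained by integrating accelerometer readings twice, so even a tiny constant sensor bias produces a position error that grows with the square of time. The sketch below assumes a constant bias purely for illustration; real error models are far more complex, but the quadratic growth is the essential point.</p>
<pre><code># Minimal sketch of inertial-navigation drift without GPS correction.
# Assumes a constant accelerometer bias; numbers are illustrative only.

bias_m_s2 = 0.001  # assumed bias, roughly 100 millionths of 1g

for minutes in (1, 10, 30, 60):
    t = minutes * 60.0
    drift_m = 0.5 * bias_m_s2 * t ** 2  # double-integrated constant bias
    print(f"{minutes:3d} min -> {drift_m:8.0f} m of drift")
</code></pre>
<p>That growth is what GPS normally corrects away, and it is why a drone that finds its target visually, rather than flying to coordinates, sidesteps the satellite problem entirely.</p>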
<p><a href="https://spacenews.com/pentagon-report-china-amassing-arsenal-of-anti-satellite-weapons/">China</a>, <a href="https://www.bbc.co.uk/news/science-environment-59299101">Russia</a>, India and the <a href="https://www.bbc.co.uk/news/science-environment-59299101">USA</a> have developed weapons to destroy satellites which provide global positioning for car sat-nav systems and civilian aircraft guidance. </p>
<p>The real nightmare scenario is combining these, and many more, weapon systems with artificial intelligence. </p>
<h2>New rules of war</h2>
<p>Are new laws or treaties needed to limit these futuristic weapons? In short, yes, but they don’t look likely. The US has <a href="https://www.cnbc.com/2022/04/18/us-to-end-anti-satellite-asat-testing-calls-for-global-agreement.html">called for</a> a global agreement to stop anti-satellite missile testing – but there has been no uptake. </p>
<p>The closest thing to an agreement so far is the signing of <a href="https://www.nasa.gov/specials/artemis-accords/index.html">NASA’s Artemis Accords</a>, a set of principles to promote the peaceful exploration of space. But they only apply to “<a href="https://www.nasa.gov/specials/artemis-accords/img/Artemis-Accords-signed-13Oct2020.pdf">civil space activities conducted by the civil space agencies</a>” of the signatory countries. In other words, the agreement does not extend to military space activities or terrestrial battlefields. </p>
<p>In contrast, the US <a href="https://www.defense.gov/News/News-Stories/Article/Article/1924779/us-withdraws-from-intermediate-range-nuclear-forces-treaty/#:%7E:text=The%20United%20States%20has%20officially,the%20nations%20involved%20could%20pursue.">has withdrawn</a> from the Intermediate-Range Nuclear Forces Treaty. This is part of a long-term <a href="https://edition.cnn.com/2019/02/01/politics/nuclear-treaty-trump/index.html">pattern of withdrawal from global agreements</a> by US administrations. </p>
<p>Lethal autonomous weapon systems are a special class of emerging weapon system. They incorporate machine learning and other types of AI so that they can make their own decisions and act without direct human input. In 2014 the International Committee of the Red Cross (ICRC) <a href="https://www.icrc.org/en/document/report-icrc-meeting-autonomous-weapon-systems-26-28-march-2014">brought experts together</a> to identify issues raised by autonomous weapon systems. </p>
<p>In 2020 the ICRC and the Stockholm International Peace Research Institute went further, bringing together international experts to identify what <a href="https://www.sipri.org/media/press-release/2020/new-sipri-and-icrc-report-identifies-necessary-controls-autonomous-weapons">controls on autonomous weapon systems </a> would be needed.</p>
<p>In 2022, discussions are ongoing between countries <a href="https://meetings.unoda.org/meeting/ccw-gge-2019/">the UN first brought together</a> in 2017. This group of governmental experts continues to debate the development and use of lethal autonomous weapon systems. However, there has still been no international agreement on a new law or treaty to limit their use.</p>
<h2>New rules for autonomous weapon systems</h2>
<p>The campaign group Stop Killer Robots has called throughout this period for an <a href="https://www.stopkillerrobots.org/">international ban</a> on lethal autonomous weapon systems. Not only has that not happened, but there is also an undeclared <a href="https://www.e-ir.info/2020/04/15/introducing-guiding-principles-for-the-development-and-use-of-lethal-autonomous-weapon-systems/">stalemate in the UN’s discussions</a> on autonomous weapons in Geneva. </p>
<p>Australia, Israel, Russia, South Korea and the US have <a href="https://una.org.uk/news/minority-states-block-progress-regulating-killer-robots">opposed a new treaty</a> or political declaration. Opposing them at the same talks, 125 member states of the Non-Aligned Movement are calling for <a href="https://documents.unoda.org/wp-content/uploads/2021/06/NAM.pdf">legally binding restrictions</a> on lethal autonomous weapon systems. And with Russia, China, the US, the UK and France all holding a UN Security Council veto, any one of them can block a binding law on autonomous weapons.</p>
<p>Outside these international talks and campaigning organisations, independent experts are proposing alternatives. For example, in 2019 the ethicist Deane-Peter Baker brought together the Canberra Group of independent international experts. The group produced <a href="https://www.e-ir.info/2020/04/15/guiding-principles-for-the-development-and-use-of-laws-version-1-0/">a report</a>, Guiding Principles for the Development and Use of Lethal Autonomous Weapon Systems.</p>
<p>These principles don’t solve the political impasse between superpowers. But if autonomous weapons are here to stay, the report is an early attempt to understand the new rules that will be needed.</p>
<p>When Pandora’s mythical box was opened, untold horrors were unleashed on the world. Emerging weapon systems are all too real. Like Pandora, all we are left with is hope.</p>
<p class="fine-print"><em><span>Peter Lee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
New weapons will require new rules of war – but there is little appetite for regulation.
Peter Lee, Professor of Applied Ethics and Director, Security and Risk Research, University of Portsmouth
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/185399
2022-06-21T02:38:27Z
2022-06-21T02:38:27Z
‘Bet you’re on the list’: how criticising ‘smart weapons’ got me banned from Russia
<figure><img src="https://images.theconversation.com/files/469892/original/file-20220621-13-ukl5qx.jpeg?ixlib=rb-1.1.0&rect=0%2C24%2C4031%2C2993&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://photos.aap.com.au/search/20220614001669403745">Pavel Nemecek / AP</a></span></figcaption></figure><p>I woke up on Friday morning a pawn in a Kafka-esque story. Except I hadn’t been transformed into a chess piece but was a diplomatic pawn, a small player in a much larger international story. I read the news that I and 119 other “prominent” Australians were <a href="https://www.theguardian.com/world/2022/jun/16/russia-bans-121-australians-including-journalists-and-defence-officials">banned from travelling to Russia “indefinitely”</a>. </p>
<p>The Russian sanctions were a response to <a href="https://www.dfat.gov.au/international-relations/security/sanctions/sanctions-regimes/russia-sanctions-regime">Western sanctions</a> and the “spreading of false information about Russia”. The Russian Foreign Ministry announced 121 people had been sanctioned but, in a beautifully Russian bureaucratic bungle, Air Vice-Marshal Darren Goldie was banned twice, making it just 120 of us on <a href="https://www.mid.ru/ru/foreign_policy/news/1818118/">the list</a>. </p>
<p>As usual, I was the second person in my family to know. My wife had woken before me and was listening to the news. “Russia has banned a bunch more Australians,” she told me. “Bet you’re on the list.” </p>
<p>The rest of the list was made up of journalists, business people, army officials, politicians and the odd academic like myself. What unites us is our outspoken criticism of Russia’s actions in Ukraine. </p>
<h2>No more trips to Russia</h2>
<p>This is one club of which I am proud to be a member. </p>
<p>And rather than silence the critics, Russia’s actions only give our concerns more exposure. After all, you wouldn’t be reading this if Russia hadn’t banned me.</p>
<p>I have a number of Russian friends and colleagues that I am saddened now not to be able to visit. I was at a conference in Moscow a few years ago and had a great time. I promised then to return to see the delights of St Petersburg. </p>
<p>And I always imagined one day I’d follow Paul Theroux’s footsteps on the trans-Siberian express. But it seems I will now only ever read about such adventures from the comfort of my armchair. </p>
<h2>AI-powered landmines</h2>
<p>This brings me to my outspoken criticism of Russia’s actions in Ukraine. </p>
<p>At the start of last week, I had the pleasure to speak about artificial intelligence (AI) at <a href="https://www.theregister.com/2022/06/10/devfest_for_ukraine_june_1415/">DevFest Ukraine</a>, an online charity event put on by the tech community that raised over US$100,000 for those impacted by Russia’s invasion. And, in acknowledging the ownership of the land on which I was speaking, I acknowledged the ownership of all lands illegally occupied including those in Ukraine. </p>
<p>But I am sure it was another act that was the cause of my sanction: casting doubt on Russia’s claims about AI. In April, I was interviewed for <a href="https://www.theaustralian.com.au/business/technology/a-russian-claim-that-its-devastating-antipersonnel-mines-can-distinguish-between-soldiers-and-civilians-is-bogus-says-australias-toby-walsh/news-story/6bdd96cf39f9a0bf96c5a2de3af9a512">a story about Russian weaponry</a> in the Australian – and as the author is the only tech journalist who made the Russian list, I’m confident that article is to blame. </p>
<p>I can just imagine the Russian official in some nondescript office in the bowels of the Foreign Ministry reading the Australian and pulling out the file to which my name was added. </p>
<p>The article reported my significant concerns about Russia’s use of <a href="https://www.hrw.org/news/2022/03/29/ukraine-russia-uses-banned-antipersonnel-landmines">the “smart” AI-enabled POM-3 anti-personnel mine in Ukraine</a>. </p>
<p>Such mines are banned by the 1997 <a href="https://www.un.org/disarmament/anti-personnel-landmines-convention/">Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines</a> (informally known as the Ottawa Treaty or the Anti-Personnel Mine Ban Convention). Russia is not a party to this treaty but <a href="https://treaties.un.org/Pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVI-5&chapter=26&clang=_en">164 states are parties to it</a>, including Australia and every country in Europe, Ukraine among them. </p>
<h2>A barbaric weapon</h2>
<p>The <a href="https://cat-uxo.com/explosive-hazards/landmines/pom-3-landmine">POM-3 is a particularly barbaric mine</a>, designed to cause maximum damage to humans. It’s a descendant of the German “<a href="https://www.warhistoryonline.com/war-articles/bouncing-betty.html">Bouncing Betty</a>” mine used in World War II. </p>
<p>When the mine is triggered, an expelling charge projects the warhead roughly one metre above ground level, at which point the warhead detonates. The warhead is packed with toothed rings designed to harm vital organs in a target’s body many metres away. </p>
<p>The mine is triggered by a seismic sensor that detects approaching footsteps. </p>
<p>Russia claims the mine is equipped with AI that can <a href="https://www.newscientist.com/article/2314453-russia-claims-smart-landmines-used-in-ukraine-only-target-soldiers/">recognise friendly soldiers</a>, thus minimising the risk of collateral damage. </p>
<p>This is an absurd claim. The footsteps of Ukrainian and Russian soldiers will produce the same seismic footprint. No AI can tell them apart. </p>
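<p>A toy experiment makes the point concrete. The sketch below uses synthetic data, and its premise (that both sides’ footsteps produce seismic features drawn from the same distribution) is assumed for illustration. Under that premise, even a properly trained classifier cannot beat a coin flip.</p>
<pre><code># Toy illustration: identical feature distributions defeat any classifier.
# Synthetic data; the premise of identical distributions is assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Soldiers on both sides weigh about the same and wear similar boots,
# so their footsteps yield statistically identical seismic features.
side_a = rng.normal(0.0, 1.0, size=(n, 4))
side_b = rng.normal(0.0, 1.0, size=(n, 4))

X = np.vstack([side_a, side_b])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression().fit(X, y)
print(f"Accuracy: {clf.score(X, y):.3f}")  # ~0.50, no better than chance
</code></pre>
<p>No amount of added “AI” changes this: the information simply is not in the signal.</p>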
<h2>Not too late to limit AI weapons</h2>
<p>Russia’s wild claim illustrates a worrying trend where states will say weapons use “AI” to target combatants rather than civilians. Handing over battlefield decision-making to AI is a hugely dangerous proposition.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>And this is just one of the many dangers of AI in warfare. Others include the lowering of the barriers to war, and the development of new weapons of mass destruction.</p>
<p>Fortunately, it’s not too late to regulate this space. Indeed, the increasing use of hi-tech drones in the conflict in Ukraine has been a wake-up call to militaries around the world that technologies like this are fundamentally <a href="https://www.dw.com/en/ukraine-how-drones-are-changing-the-way-of-war/a-61681013">changing how we fight wars</a>. </p>
<p>Discussions are moving slowly at the United Nations to limit the use of lethal autonomous weapons. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>Australia has an opportunity to take leadership in this area. It has long been at the forefront of international efforts to combat the spread of chemical and biological weapons, but has taken a back seat in the diplomatic efforts around autonomous weapons. </p>
<p>It’s time we took up the cause of regulating weapons that use AI to identify, track and target humans. I could then get back to reading about the wonderful history of Russia from my armchair.</p>
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council as an ARC Laureate Fellow.</span></em></p>
Russia’s absurd claims about ‘smart’ landmines show it’s high time the world put limits on autonomous weapons.
Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/180244
2022-03-29T19:13:29Z
2022-03-29T19:13:29Z
Drones over Ukraine: fears of Russian ‘killer robots’ have failed to materialise
<p>Drones have played a starring role in Ukraine’s defence against the ongoing Russian attack. Before the invasion, experts believed Russia’s own fleets of “killer robots” were likely to be a far more potent weapon, but to date they have hardly been seen.</p>
<p>What’s going on? Ukraine’s drone program grew from <a href="https://aerorozvidka.xyz/">a crowd-funded group of hobbyists</a>, who appear to know and like their technology – even if it isn’t the cutting edge. Russia, on the other hand, seems to have swarms of next-generation autonomous weapons, but generals <a href="https://breakingdefense.com/2022/03/russia-has-a-military-professionalism-problem-and-it-is-costing-them-in-ukraine/">may lack faith in the technology</a>. </p>
<h2>Drone vs drone</h2>
<p>Ukraine is using Turkish Bayraktar TB2 armed drones, provided under <a href="https://finabel.org/turkey-and-ukraine-tb2-drone-agreement/">a deal inked last year</a>. Operated by a crew on the ground, these are essentially remote-controlled planes armed with rockets or missiles. Ukraine is also using commercially available drones.</p>
<p>Less is known about Russia’s drones, particularly new models with artificial intelligence (AI) capabilities. Last year, the Russian Ministry of Defence announced the creation of <a href="https://www.nationaldefensemagazine.org/articles/2021/7/20/russia-expanding-fleet-of-ai-enabled-weapons">a special AI department</a> with its own budget, which would begin its work in December 2021. </p>
<p>Just before invading Ukraine, <a href="https://nationalinterest.org/blog/reboot/russian-drone-swarm-technology-promises-aerial-minefield-capabilities-198640">Russian forces were seen testing new “swarm” drones</a>, as well as unmanned autonomous weapons capable of tracking and shooting down enemy aircraft. However, there is no evidence they have been used in Ukraine for that purpose. </p>
<p>This isn’t the first time these types of drones with lethal capability have featured on the world stage. Russia deployed “interceptor” drones to defend against hostile aircraft when it annexed Crimea in 2014; and, in 2020, Azerbaijan used drones against Armenia during the Nagorno-Karabakh conflict. And the US has committed to <a href="https://www.cbsnews.com/news/u-s-giving-ukraine-more-drones-a-surprisingly-lethal-weapon-in-the-war-against-russia-so-far/">providing Ukraine access to its highly portable “suicide drone”, the Switchblade</a>.</p>
<h2>Are drones the future of warfare?</h2>
<p>The world has been grappling with the concept of “killer drones” for more than two decades. Despite international and domestic law concerns, defence forces around the world are investing heavily in autonomous weapon technologies because they cost far less than a similar crewed weapon, like a tank or aircraft, and don’t place drivers or pilots at risk. </p>
<p>As warfare becomes ever more technologically advanced, AI-powered drones are creating a new concept of power.</p>
<p><a href="https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/">As far back as 2017</a>, Russian President Vladimir Putin said the development of AI raises “colossal opportunities and threats that are difficult to predict”, warning that “the one who becomes the leader in this sphere will be the ruler of the world”.</p>
<p>The Russian leader predicted future wars would be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender”.</p>
<h2>Homemade drones</h2>
<p>Putin has previously identified the development of weapons with elements of AI as one of Russia’s five major military priorities.</p>
<p>Yet since Russia invaded Ukraine, it seems to be Ukrainian drones that are being used to greatest effect – predominantly by <a href="https://www.theguardian.com/world/2022/mar/28/the-drone-operators-who-halted-the-russian-armoured-vehicles-heading-for-kyiv">targeting Russian logistic elements</a> supplying fuel or ammunition to frontline forces. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/eyes-on-the-world-drones-change-our-point-of-view-and-our-truths-143838">Eyes on the world – drones change our point of view and our truths</a>
</strong>
</em>
</p>
<hr>
<p>Ukrainian soldiers have reportedly been using <a href="https://www.defenseone.com/ideas/2022/03/send-quadcopters-arm-ukrainian-citizens-simple-drones/362730/">drones bought off the shelf</a> to locate Russian military targets and to help coordinate artillery strikes. Reports have even emerged of Ukrainian soldiers <a href="https://www.wesh.com/article/ukraine-drone-enthusiasts-russian-invasion/39330353">jury-rigging explosives to homemade drones before flying them at Russian tanks</a>. </p>
<p>Footage of drone strikes is also proving a potent information weapon, with <a href="https://taskandpurpose.com/analysis/drones-ukraine-information-warfare/">Ukrainian soldiers uploading it to social media</a>. </p>
<h2>Where are Russia’s drones?</h2>
<p>It’s hard to know exactly why we haven’t seen a Russian drone onslaught.</p>
<p>One possible reason is that drones are being held in reserve for a later escalation in the conflict. Drones can deliver chemical, biological or even nuclear weapons without endangering a human pilot – and Russia’s current strategy suggests it may not shrink from using banned weapons.</p>
<p>Another possible reason is logistics. Given widespread reports of Russian military vehicles breaking down, Russia may not be able to support drone operations in Ukraine. </p>
<p><a href="https://breakingdefense.com/2022/03/russia-has-a-military-professionalism-problem-and-it-is-costing-them-in-ukraine/">According to RAND Institute experts</a>, however, one of the biggest reasons may be a lack of trust in the technology. </p>
<h2>Why is trust so important?</h2>
<p>All modern military forces involve trust: trust in subordinates to follow orders, and trust in commanders to give lawful orders. When a machine is used in the place of a human, a commander must be able to trust that machine as much as a human being. </p>
<p>This produces significant problems. Researchers have long been aware of “machine bias”: <a href="https://medium.com/whattolabel/bias-in-machine-learning-d15ebee7db45">the idea that we trust machines to make decisions, simply because they’re machines</a>. Yet misplaced trust in machines – especially if they are making life-and-death decisions – can have catastrophic results. </p>
<p>One way to improve trust in military drones could be to limit them to simple roles. A drone acting simply as an airborne camera can’t fake what it sees, whereas a drone scanning video footage to identify targets (what the military call a “decision support system”) is far more likely to make a fatal mistake. </p>
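<p>The base-rate arithmetic behind that difference is worth spelling out. In the sketch below every number is an illustrative assumption, but the pattern is general: even a detector that is right 99% of the time produces overwhelmingly false alarms when genuine targets are rare.</p>
<pre><code># Base-rate sketch: why a target-spotting system errs so often.
# All numbers are illustrative assumptions.

prevalence = 1e-4   # assumed fraction of frames containing a real target
sensitivity = 0.99  # the detector flags 99% of real targets
specificity = 0.99  # it wrongly flags only 1% of harmless frames

true_alarms = sensitivity * prevalence
false_alarms = (1 - specificity) * (1 - prevalence)
precision = true_alarms / (true_alarms + false_alarms)
print(f"Share of alarms that are real targets: {precision:.1%}")  # ~1.0%
</code></pre>
<p>On these assumptions, roughly 99 in every 100 alarms point at something harmless, exactly the kind of fatal mistake a decision support system invites.</p>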
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>Another way to improve trust in drones is to refuse to arm them with lethal weapons, or program them to disarm enemy soldiers. In 2007, John Canning, a researcher at the Naval Surface Warfare Center, suggested <a href="https://www.theregister.com/2007/04/13/i_robowarrior/">future autonomous weapons might attack rifles or ammunition instead of attacking the human holding them</a>.</p>
<p>In the age of autonomous warfare, the limit will be how far we trust machines. As lethal drones become more common and familiar, how satisfied are we that these drones will make the right decisions? To use these weapons we will need to trust them, but first we will need to make sure that trust is justified.</p>
<p class="fine-print"><em><span>Brendan Walker-Munro receives funding from the Australian Government through Trusted Autonomous Systems, a Defence Cooperative Research Centre funded through the Next Generation Technologies Fund. </span></em></p>
Russia has sophisticated drone capabilities, but generals may not trust the technology enough to use it.
Brendan Walker-Munro, Senior Research Fellow, The University of Queensland
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/173616
2021-12-20T13:14:50Z
2021-12-20T13:14:50Z
UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research
<figure><img src="https://images.theconversation.com/files/436998/original/file-20211210-27-1o7cvsn.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5763%2C4225&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Humanitarian groups have been calling for a ban on autonomous weapons.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/march-2019-berlin-a-robot-stands-in-front-of-the-news-photo/1131801019">Wolfgang Kumm/picture alliance via Getty Images</a></span></figcaption></figure><p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>The United Nations <a href="https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/">Convention on Certain Conventional Weapons</a> debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but <a href="https://www.reuters.com/article/us-un-disarmament-idAFKBN2IW1UJ">didn’t reach consensus on a ban</a>. Established in 1983, the convention has been updated regularly to restrict some of the world’s cruelest conventional weapons, including land mines, booby traps and incendiary weapons.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could be <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the U.N. meeting might have been the last chance to head off an arms race.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
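<p>The arithmetic of scale is easy to state. The following is a rough sketch with assumed numbers rather than estimates of any real system: an algorithm that is right 99.9% of the time still errs in bulk once its decisions are counted in the millions.</p>
<pre><code># Rough arithmetic on error at scale; all numbers are illustrative assumptions.

accuracy = 0.999               # an optimistically reliable targeting algorithm
decisions_per_day = 1_000_000  # one algorithm deployed across a whole theater

errors_per_day = (1 - accuracy) * decisions_per_day
print(f"{errors_per_day:,.0f} misidentifications per day")  # 1,000
</code></pre>
<p>A million humans making the same calls would err too, but not simultaneously and not from a single shared flaw.</p>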
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified Black people as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
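<p>The pneumonia example repays a closer look. The sketch below reproduces the mechanism with synthetic data; the confound, that asthma patients received aggressive care that never appears in the features, is assumed for illustration. A model fitted exactly as designed then learns that asthma lowers risk.</p>
<pre><code># Sketch of the asthma/pneumonia confound, using synthetic data.
# Assumption: asthma patients got intensive care, so recorded outcomes
# show lower mortality even though their underlying risk is higher.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
asthma = rng.integers(0, 2, size=n)

# Recorded mortality: 2% with asthma (treated aggressively), 5% without.
p_death = np.where(asthma == 1, 0.02, 0.05)
died = rng.binomial(1, p_death)

model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print(f"Asthma coefficient: {model.coef_[0][0]:+.2f}")  # negative: "protective"
</code></pre>
<p>The model is internally correct: in the data it saw, asthma really did predict survival. The error lives in the gap between the data and the world, which is why inspecting the code will not catch it.</p>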
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the “<a href="https://news.un.org/en/story/2020/07/1068041">myth of a surgical strike</a>” to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undergo while launching and maintaining wars. </p>
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback experienced around the world today</a>. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap</a>.</p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/an-autonomous-robot-may-have-already-killed-people-heres-how-the-weapons-could-be-more-destabilizing-than-nukes-168049">article</a> originally published on September 29, 2021.</em></p>
<p class="fine-print"><em><span>James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.
James Dawes, Professor of English, Macalester College
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/168049
2021-09-29T12:23:40Z
2021-09-29T12:23:40Z
An autonomous robot may have already killed people – here’s how the weapons could be more destabilizing than nukes
<figure><img src="https://images.theconversation.com/files/423433/original/file-20210927-21-wsi2zg.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4928%2C3280&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The term 'killer robot' often conjures images of Terminator-like humanoid robots. Militaries around the world are working on autonomous machines that are less scary looking but no less lethal.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:W-MUTT_-_Ship-to-Shore_Maneuver_Exploration_and_Experimentation_2017_01.jpg">John F. Williams/U.S. Navy</a></span></figcaption></figure><p><em>An updated version of this article was published on Dec. 20, 2021. <a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">Read it here</a>.</em></p>
<p>Autonomous weapon systems – commonly known as killer robots – may have <a href="https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d">killed human beings for the first time ever</a> last year, according to a recent United Nations Security Council <a href="https://undocs.org/S/2021/229">report on the Libyan civil war</a>. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.</p>
<p>Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are <a href="https://www.newsweek.com/2021/09/24/us-only-nation-ethical-standards-ai-weapons-should-we-afraid-1628986.html">investing heavily</a> in autonomous weapons research and development. The U.S. alone <a href="https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/">budgeted US$18 billion</a> for autonomous weapons between 2016 and 2020. </p>
<p>Meanwhile, human rights and <a href="https://www.stopkillerrobots.org/">humanitarian organizations</a> are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, <a href="https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html">increasing the risk of preemptive attacks</a>, and because they could become <a href="https://foreignpolicy.com/2020/10/14/ai-drones-swarms-killer-robots-partial-ban-on-autonomous-weapons-would-make-everyone-safer/">combined with chemical, biological, radiological and nuclear weapons</a> themselves. </p>
<p>As a <a href="https://scholar.google.com/citations?user=92kUNgwAAAAJ&hl=en&oi=sra">specialist in human rights</a> with a focus on the <a href="https://muse.jhu.edu/article/761349#bio_wrap">weaponization of artificial intelligence</a>, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained <a href="https://wwnorton.com/books/thermonuclear-monarchy/">authority to launch a strike</a> – more unsteady and more fragmented.</p>
<h2>Lethal errors and black boxes</h2>
<p>I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fPqmC16ewYg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Killer robots, like the drones in the 2017 short film ‘Slaughterbots,’ have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)</span></figcaption>
</figure>
<p>The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent <a href="https://www.reuters.com/world/asia-pacific/us-military-says-10-civilians-killed-kabul-drone-strike-last-month-2021-09-17/">U.S. drone strike in Afghanistan</a> seem like mere rounding errors by comparison.</p>
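<p>A back-of-envelope simulation (a minimal sketch with invented numbers, not data from any real weapons programme) makes the scale argument concrete: thousands of independent human errors and one shared algorithmic flaw can have the same <em>average</em> failure rate while producing utterly different worst cases.</p>
<pre><code># Illustrative only: invented error rates, no real weapons data.
import numpy as np

rng = np.random.default_rng(0)
strikes, p, trials = 100_000, 0.001, 10_000

# Independent human errors: each of 100,000 decisions fails with
# probability 0.1%, independently, so outcomes concentrate near the mean.
human_errors = rng.binomial(strikes, p, size=trials)

# One shared targeting algorithm: the same 0.1% flaw is perfectly
# correlated across every strike it governs -- it fails everywhere or nowhere.
algo_errors = np.where(rng.random(trials) < p, strikes, 0)

print(f"humans:    mean {human_errors.mean():8.1f}, worst case {human_errors.max():7d}")
print(f"algorithm: mean {algo_errors.mean():8.1f}, worst case {algo_errors.max():7d}")
</code></pre>
<p>The means match, but the algorithm’s worst case is every strike at once – the statistical version of the runaway gun described next.</p>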
<p>Autonomous weapons expert Paul Scharre uses the metaphor of <a href="https://wwnorton.com/books/Army-of-None/">the runaway gun</a> to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun continues to fire until ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard. </p>
<p>Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can <a href="https://brianchristian.org/the-alignment-problem/">generate internally correct outcomes that nonetheless spread terrible errors</a> rapidly across populations. </p>
<p>For example, a neural net designed for use in Pittsburgh hospitals identified <a href="https://www.pulmonologyadvisor.com/home/topics/practice-management/the-potential-pitfalls-of-machine-learning-algorithms-in-medicine/">asthma as a risk-reducer</a> in pneumonia cases; image recognition software used by Google <a href="https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/">identified African Americans as gorillas</a>; and a machine-learning tool used by Amazon to rank job candidates <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G">systematically assigned negative scores to women</a>.</p>
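<p>The asthma case is worth unpacking, because the model was not malfunctioning: it faithfully learned a pattern in its training data. The sketch below reconstructs the logic on synthetic data (the numbers and variables are invented, not the hospital study’s): asthma patients were routed to aggressive care, their observed mortality fell, and a model trained on outcomes alone concludes asthma is protective.</p>
<pre><code># Synthetic reconstruction of a confounded learner -- invented data,
# not the Pittsburgh study's records or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
asthma = rng.random(n) < 0.15                    # 15% of patients have asthma
severity = rng.normal(0, 1, n) + 0.5 * asthma    # asthma truly worsens illness
icu_care = asthma | (severity > 1.5)             # but it also triggers ICU care

# Mortality rises with severity and falls sharply with ICU care.
p_death = 1 / (1 + np.exp(-(severity - 2.5 * icu_care)))
died = rng.random(n) < p_death

# Train on (asthma, severity) -> death, as a naive triage model might.
X = np.column_stack([asthma, severity]).astype(float)
model = LogisticRegression().fit(X, died)
print(f"learned asthma coefficient: {model.coef_[0][0]:+.2f}")
# Negative: the model 'correctly' learns that asthma predicts survival --
# a deadly rule if used to deny asthma patients urgent care.
</code></pre>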
<p>The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don’t know why they did and, therefore, how to correct them. The <a href="https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf">black box problem</a> of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems. </p>
<h2>The proliferation problems</h2>
<p>The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to <a href="https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/">contain and control the use of autonomous weapons</a>. But if the history of weapons technology has taught the world anything, it’s this: Weapons spread. </p>
<p>Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the <a href="https://www.npr.org/templates/story/story.php?storyId=6539945">Kalashnikov assault rifle</a>: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Front view of a quadcopter showing its camera" src="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=289&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=289&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=289&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423428/original/file-20210927-17-1kqlqer.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence for finding and tracking targets, and might have been used autonomously in the Libyan civil war to attack people.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:STM_Kargu.png">Ministry of Defense of Ukraine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of <a href="https://cpr.unu.edu/publications/articles/ai-global-governance-ai-and-nuclear-weapons-promise-and-perils-of-ai-for-nuclear-stability.html">mounting chemical, biological, radiological and nuclear arms</a>. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.</p>
<p>High-end autonomous weapons are likely to lead to more frequent wars because they will decrease two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive <a href="https://smartech.gatech.edu/bitstream/handle/1853/31465/09-02.pdf">ethical governors</a> designed to minimize collateral damage, using what U.N. Special Rapporteur Agnes Callamard has called the <a href="https://news.un.org/en/story/2020/07/1068041">“myth of a surgical strike”</a> to quell moral protests. Autonomous weapons will also reduce both the need for and risk to one’s own soldiers, dramatically altering the <a href="https://www.jstor.org/stable/3312365?seq=1#metadata_info_tab_contents">cost-benefit analysis</a> that nations undertake when launching and maintaining wars.</p>
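<p>It is worth seeing how thin such a governor can be. Below is a minimal sketch (the names, thresholds and rules are invented for illustration; this is not Arkin’s published architecture or any fielded system) of a rule layer that screens machine-selected engagements against a collateral-damage estimate. The veto is only as trustworthy as the upstream estimate feeding it – which is precisely where the “myth of a surgical strike” enters.</p>
<pre><code># Schematic sketch only: invented names and thresholds, not a real system.
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    est_civilian_harm: float  # produced by an upstream model -- the weak link
    military_value: float

def ethical_governor(plan: Engagement, harm_ceiling: float = 0.1) -> bool:
    """Permit an engagement only if it passes the rule layer's checks."""
    if plan.est_civilian_harm > harm_ceiling:   # hard veto above the ceiling
        return False
    # Crude proportionality test: claimed value must outweigh estimated harm.
    return plan.military_value > plan.est_civilian_harm

# The governor is only as honest as its inputs: an underestimate of civilian
# harm sails through every check.
plan = Engagement("T-041", est_civilian_harm=0.02, military_value=0.6)
print(ethical_governor(plan))  # True -- on the strength of one estimate
</code></pre>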
<p>Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy war to the <a href="https://dx.doi.org/10.2139/ssrn.3804885">blowback</a> experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons. </p>
<h2>Undermining the laws of war</h2>
<p>Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 <a href="https://www.law.cornell.edu/wex/geneva_conventions_and_their_additional_protocols">Geneva Convention</a>, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to murder civilians. A prominent example of someone held to account is <a href="https://www.britannica.com/biography/Slobodan-Milosevic">Slobodan Milosevic</a>, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the U.N.’s International Criminal Tribunal for the Former Yugoslavia.</p>
<p>But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious <a href="https://www.hrw.org/news/2020/06/01/need-and-elements-new-treaty-fully-autonomous-weapons#">accountability gap</a>.</p>
<p>To hold a soldier <a href="https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=1011&context=djilp">criminally responsible</a> for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great. </p>
<p>The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate <a href="https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">meaningful human control</a> of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.</p>
<h2>A new global arms race</h2>
<p>Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable <a href="https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815">algorithmic errors</a> that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.</p>
<p>In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.</p>
<img src="https://counter.theconversation.com/content/168049/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>James Dawes does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Sci-fi nightmares of a robot apocalypse aside, autonomous weapons are a very real threat to humanity. An expert on the weapons explains how the emerging arms race could be humanity’s last.
James Dawes, Professor of English, Macalester College
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/166168
2021-08-19T02:28:03Z
2021-08-19T02:28:03Z
New Zealand could take a global lead in controlling the development of ‘killer robots’ — so why isn’t it?
<figure><img src="https://images.theconversation.com/files/416654/original/file-20210818-27-1ppaodo.jpg?ixlib=rb-1.1.0&rect=8%2C0%2C5982%2C3853&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>“New Zealand versus the killer robots” might sound like a science fiction B-movie, but that was essentially the focus of an event at parliament earlier this month.</p>
<p>Hosted by Minister of Disarmament and Arms Control Phil Twyford, the “<a href="https://www.beehive.govt.nz/speech/remarks-dialogue-autonomous-weapons-systems-and-human-control">Dialogue on Autonomous Weapons Systems and Human Control</a>” looked at how New Zealand might take more of an international lead in regulating these highly contentious new technologies.</p>
<p>Twyford warned of the danger of warfare “delegated to machines”. He referred to a <a href="http://www.converge.org.nz/pma/nz-kr-survey.pdf">recent survey</a> showing widespread public opposition to the deployment of autonomous weapons in war and strong support for government action to ban or limit their development and use.</p>
<p>The prospect of New Zealand’s leadership has been warmly received by activists and campaigners involved in the “killer robots” debate. </p>
<p>Human Rights Watch’s Mary Wareham <a href="https://www.newsroom.co.nz/politics/pace-picks-up-in-the-war-against-killer-robots">has argued</a> New Zealand leadership could act as “a total catalyst for action”, while the Campaign to Stop Killer Robots listed Twyford’s commitment as one of the “<a href="https://www.stopkillerrobots.org/about/">key actions and achievements</a>” of its campaign to date.</p>
<p>Yet New Zealand has not joined the 30 states that have formally called for a <a href="https://www.pgaction.org/declaration-support-treaty-prohibition-faw.html">ban on autonomous weapons</a>, and Twyford’s statements have tended to waver between bullish and reserved. During the event at parliament he acknowledged the clear ethical problems with autonomous weapons, but also the complexity of making policy. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1425001555355312132"}"></div></p>
<h2>Sensitivity to military allies</h2>
<p>If the mood of the people and government of New Zealand is strongly behind regulation, what makes the issue so difficult? </p>
<p>The short answer is politics and economics. A <a href="https://www.newsroom.co.nz/politics/pace-picks-up-in-the-war-against-killer-robots">major obstacle</a> for Twyford is preserving the New Zealand Defence Force’s ability to work with allies and partners.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>Both the US and Australia are heavily invested in pursuing cutting-edge military technologies, including robotics, artificial intelligence and autonomy. A key pillar of their strategy is building systems that <a href="https://sldinfo.com/2020/02/shaping-an-australian-navy-approach-to-maritime-remotes-artificial-intelligence-and-combat-grids/">allow more coordination</a> on the battlefield.</p>
<p>Leading a movement to have these systems regulated or banned could see New Zealand’s military shut out of joint exercises where such technologies are being trialled or used.</p>
<p>Given the <a href="https://www.rnz.co.nz/news/political/441936/where-will-new-zealand-stand-in-rising-tensions-between-china-and-other-allies">political pressure</a> to take a stronger stand against China, it seems unlikely New Zealand’s Foreign Affairs and Trade or Defence ministries will want to risk further discord with key defence partners.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1425967055489028099"}"></div></p>
<h2>Protecting high-tech industry</h2>
<p>The second hurdle lies in the economic promise of technologies developed in New Zealand that could potentially be used in autonomous weapons programmes elsewhere. </p>
<p>Many leading engineers and technologists have advocated for the <a href="https://futureoflife.org/ai-open-letter/">regulation or banning</a> of autonomous weapons, but others are attracted by the potential rewards of military-related projects. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/killer-robots-free-will-and-the-illusion-of-control-87460">Killer robots, free will and the illusion of control</a>
</strong>
</em>
</p>
<hr>
<p>These tensions have <a href="https://www.newshub.co.nz/home/politics/2021/06/rocket-lab-not-evil-but-kiwis-right-to-feel-uneasy-about-us-military-ties-journalist.html">already surfaced</a> in the debate about US military payloads being launched from New Zealand by US-owned aerospace company Rocket Lab. </p>
<p>Autonomous weapons could well see similar questions raised about other technologies developed by New Zealand companies or researchers — most obviously in the fields of computer vision, robotics and swarm intelligence — that could be used in military systems. </p>
<p>Regulating autonomous weapons without also inhibiting potentially lucrative AI and robotics research and development remains a challenge.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1425741065051537413"}"></div></p>
<h2>Public opinion not enough</h2>
<p>The hope that regulation of autonomous weapons could represent another “anti-nuclear moment” in New Zealand’s disarmament and foreign policy history therefore seems premature. </p>
<p>While it’s clear there is support for some form of regulation, there’s <a href="https://www.scoop.co.nz/stories/PO2008/S00133/killer-robots-growing-support-for-ban-but-new-zealands-stance-remains-weak.htm">little evidence</a> at this stage to suggest public opinion will sway the government’s current conservative and watchful position.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-has-already-been-weaponised-and-it-shows-why-we-should-ban-killer-robots-102736">AI has already been weaponised – and it shows why we should ban 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<p>So, what should be done? In the absence of international agreement, New Zealand could press ahead with its own domestic legislation to regulate these technologies, as proposed in a <a href="https://www.parliament.nz/en/pb/petitions/document/PET_104114/petition-of-edwina-hughes-for-aotearoa-new-zealand-campaign">petition</a> from local Campaign to Stop Killer Robots coordinator Edwina Hughes. </p>
<p>This has the potential to expose a lack of serious commitment to principle in the government’s position, but it would still come up against the political and economic interests opposed to action on autonomous weapons.</p>
<p>Acknowledging those political and economic obstacles is a critical first step for meaningful public debate.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/never-mind-killer-robots-even-the-good-ones-are-scarily-unpredictable-82963">Never mind killer robots – even the good ones are scarily unpredictable</a>
</strong>
</em>
</p>
<hr>
<h2>Engagement and transparency the key</h2>
<p>In the near term, a stocktaking exercise should be undertaken to understand what research and development is being carried out in New Zealand universities and companies. </p>
<p>Efforts should also be made to understand which autonomous technologies are likely to be developed and possibly deployed in the coming years by New Zealand’s major defence partners, particularly Australia and the US. </p>
<p>Serious, sustained dialogue with commercial interests and defence partners is a necessary precondition for the advancement of Twyford’s agenda. While there is <a href="http://www.converge.org.nz/pma/nz-gge,5aug21.pdf">some evidence</a> this work is underway, it needs greater transparency to ensure public understanding of what’s at stake. </p>
<p>Without that, New Zealand will probably struggle to take an international leadership role on this critical issue.</p>
<p class="fine-print"><em><span>Jeremy Moses receives funding from The Royal Society of New Zealand Marsden Fund. </span></em></p><p class="fine-print"><em><span>Geoffrey Ford receives funding from the Royal Society of New Zealand Marsden Fund. </span></em></p><p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>
New Zealanders are worried about autonomous weapons. But military alliances with the US and Australia, and potential economic gains from local robotics research, mean NZ won’t yet take a tough stand.
Jeremy Moses, Associate Professor in International Relations, University of Canterbury
Geoffrey Ford, Lecturer in Digital Humanities / Postdoctoral Fellow in Political Science and International Relations, University of Canterbury
Sian Troath, Postdoctoral fellow, University of Canterbury
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/165822
2021-08-12T02:12:07Z
2021-08-12T02:12:07Z
Lethal autonomous weapons and World War III: it’s not too late to stop the rise of ‘killer robots’
<figure><img src="https://images.theconversation.com/files/415601/original/file-20210811-13-fvcs86.jpg?ixlib=rb-1.1.0&rect=615%2C0%2C1023%2C675&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The STM Kargu attack drone.</span> <span class="attribution"><a class="source" href="https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav">STM</a></span></figcaption></figure><p>Last year, according to a <a href="https://undocs.org/S/2021/229">United Nations report</a> published in March, Libyan government forces hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to attack targets without requiring data connectivity between the operator and the munition”. The deadly drones were <a href="https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav">Turkish-made quadcopters</a> about the size of a dinner plate, capable of delivering a warhead weighing a kilogram or so. </p>
<p>Artificial intelligence researchers like me have been <a href="https://futureoflife.org/open-letter-autonomous-weapons/">warning</a> of the advent of such lethal autonomous weapons systems, which can make life-or-death decisions without human intervention, for years. A <a href="https://iview.abc.net.au/video/NC2103H026S00">recent episode of 4 Corners</a> reviewed this and many other risks posed by developments in AI.</p>
<p>Around 50 countries are <a href="https://www.hrw.org/news/2021/08/02/killer-robots-urgent-need-fast-track-talks">meeting</a> at the UN offices in Geneva this week in the latest attempt to hammer out a treaty to prevent the proliferation of these killer devices. History shows such treaties are needed, and that they can work.</p>
<h2>The lesson of nuclear weapons</h2>
<p>Scientists are pretty good at warning of the dangers facing the planet. Unfortunately, society is less good at paying attention.</p>
<p>In August 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, killing up to 200,000 civilians. Japan surrendered days later. The second world war was over, and the Cold War began.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/world-politics-explainer-the-atomic-bombings-of-hiroshima-and-nagasaki-100452">World politics explainer: The atomic bombings of Hiroshima and Nagasaki</a>
</strong>
</em>
</p>
<hr>
<p>The world still lives today under the threat of nuclear destruction. On a dozen or so occasions since then, we have come within minutes of all-out nuclear war.</p>
<p>Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project were concerned about such a future. A <a href="https://www.atomicheritage.org/key-documents/szilard-petition">secret petition</a> was sent to President Harry S. Truman in July 1945. It accurately predicted the future:</p>
<blockquote>
<p>The development of atomic power will provide the nations with new means of destruction. The atomic bombs at our disposal represent only the first step in this direction, and there is almost no limit to the destructive power which will become available in the course of their future development. Thus a nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.</p>
<p>If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation. All the resources of the United States, moral and material, may have to be mobilized to prevent the advent of such a world situation …</p>
</blockquote>
<p>Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually assured destruction, the “continuous danger of sudden annihilation” that the physicists warned about in July 1945.</p>
<h2>A warning to the world</h2>
<p>Six years ago, thousands of my colleagues issued a <a href="https://futureoflife.org/open-letter-autonomous-weapons/">similar warning</a> about a new threat. Only this time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being developed in secret. Nevertheless, they pose a similar threat to global stability.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/open-letter-we-must-stop-killer-robots-before-they-are-built-44577">Open letter: we must stop killer robots before they are built</a>
</strong>
</em>
</p>
<hr>
<p>The threat comes this time from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them “killer robots”.</p>
<p>Our open letter to the UN carried a stark warning.</p>
<blockquote>
<p>The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/worlds-deadliest-inventor-mikhail-kalashnikov-and-his-ak-47-126253">World's deadliest inventor: Mikhail Kalashnikov and his AK-47</a>
</strong>
</em>
</p>
<hr>
<p>Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. An army can take on the riskiest of missions without endangering its own soldiers. </p>
<h2>Nightmare swarms</h2>
<p>There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand to a machine the decision of whether a person should live or die. </p>
<p>Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction. </p>
<p>Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them. Now just one programmer could control hundreds of weapons.</p>
<p>In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires considerable technical sophistication. You need the resources of a nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium and plutonium. As a result, nuclear weapons have not proliferated greatly. </p>
<p>Autonomous weapons require none of this, and if produced they will likely become cheap and plentiful. They will be perfect weapons of terror. </p>
<p>Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can you imagine such drones in the hands of terrorists and rogue states with no qualms about turning them on civilians? They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.</p>
<h2>Time for a treaty</h2>
<p>We stand at a crossroads on this issue. It needs to be seen as morally unacceptable for machines to decide who lives and who dies, and the diplomats at the UN need to negotiate a treaty limiting their use, just as we have treaties limiting chemical, biological and other weapons. In this way, we may be able to save ourselves and our children from this terrible future.</p>
<p class="fine-print"><em><span>Toby Walsh is a Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia. He is a Fellow of the Australian Academy of Science and author of the recent book, “2062: The World that AI Made” that explores the impact AI will have on society, including the impact on war. </span></em></p>
Like atomic bombs and chemical and biological weapons, deadly drones that make their own decisions must be tightly controlled by an international treaty.
Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/160442
2021-05-13T03:37:39Z
2021-05-13T03:37:39Z
The Mitchells vs The Machines shows ‘smart’ tech might be less of a threat to family bonds than we fear
<figure><img src="https://images.theconversation.com/files/400158/original/file-20210512-13-jh49x8.jpg?ixlib=rb-1.1.0&rect=0%2C11%2C3996%2C2141&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Netflix</span></span></figcaption></figure><p>Robots have fascinated cinema-goers ever since Fritz Lang’s 1927 expressionist silent film <a href="https://www.imdb.com/title/tt0017136/?ref_=fn_al_tt_1">Metropolis</a>. The German dystopia film portrays a near future where a female robot (a “gynoid”) is built as an evil twin of Maria, a woman trying to unionise the workforce. The robot Maria wreaks havoc, turning the workers against each other, inciting murder and the destruction of the machines powering the city. </p>
<p>The portrayal of robots in popular culture has always captured the technological hopes and fears of the day, veering between hyperbolic promises and dystopian nightmares. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Black and white photo of sci fi robot" src="https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=407&fit=crop&dpr=1 600w, https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=407&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=407&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=512&fit=crop&dpr=1 754w, https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=512&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/399643/original/file-20210510-12-1q2d7mr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=512&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The original evil robot from Metropolis.</span>
<span class="attribution"><a class="source" href="https://www.imdb.com/title/tt0017136/?ref_=nv_sr_srsg_0">IMDB</a></span>
</figcaption>
</figure>
<p>This millennium, Pixar’s animated <a href="https://www.imdb.com/title/tt0910970/">WALL-E</a> (2008) gave us warm fuzzies for a friendly and lonely garbage-cleaning robot. Comedy-drama <a href="https://www.imdb.com/title/tt1990314/">Robot & Frank</a> (2012) showed a close relationship developing between an older man and his care robot. </p>
<p>The psychological thriller <a href="https://www.imdb.com/title/tt0470752/">Ex Machina</a> (2014) featured Ava, another beautiful gynoid who attracts and then attacks her human creators. British sci-fi television series <a href="https://www.imdb.com/title/tt4122068/">Humans</a> (2015-18) picked up where Steven Spielberg’s <a href="https://www.imdb.com/title/tt0212720/">A.I.</a> (2001) left off, exploring the blurred lines when humanoid robot servants join households. </p>
<p>Now we have Netflix’s <a href="https://www.imdb.com/title/tt7979580/">The Mitchells vs. the Machines</a>, where the very ordinary Mitchell family — mother Linda (Maya Rudolph), father Rick (Danny McBride), teenage daughter Katie (Abbi Jacobson) and younger son Aaron (Mike Rianda) — are forced to unite against menacing robots out to rid the Earth of humans. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_ak5dFt8Ar0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">‘We’re the Mitchells … the only people who can save the world. Sorry about that.’</span></figcaption>
</figure>
<p>The film presents a nuanced portrayal of our relationship with machines. It makes fun of the hype and idolatry pervading digital technology, responds to our anxieties about new technology, and reminds us old tech is sometimes better than over-designed, unnecessarily “smart” devices. </p>
<h2>Eyes down</h2>
<p>While films such as Metropolis expressed fears about intelligent machines gaining control over humans, The Mitchells vs. the Machines taps into our fears of technology interfering with our relationships. </p>
<p>At family dinner, Rick begs the others to look up from their phones and make eye contact just for once. </p>
<p>Katie has gained entry to film school, and feels her father doesn’t understand her obsession with her phone and digital film-making. Rick longs to resume the kind of close relationships he had with Katie when she was a little girl. He proudly drives a battered, “non-smart” car and wants to teach his children old-fashioned survival skills. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Production still reads 'Accept Katie Mitchell'" src="https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=324&fit=crop&dpr=1 600w, https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=324&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=324&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=408&fit=crop&dpr=1 754w, https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=408&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/400161/original/file-20210512-19-qcc3xy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=408&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Katie loves using technology to make movies, and gets herself accepted into film school.</span>
<span class="attribution"><span class="source">Netflix ©2021 SPAI. All Rights Reserved.</span></span>
</figcaption>
</figure>
<p>Directors Mike Rianda and Jeff Rowe populate The Mitchells vs. the Machines with modern household technology gone bad, until the Mitchells find themselves battling killer androids unleashed by a smartphone voice assistant named PAL (Olivia Colman) — a nod to the malicious HAL of <a href="https://www.imdb.com/title/tt0062622/">2001: A Space Odyssey</a> (1968). </p>
<p>At first glance, PAL is not fancy. She is a simple face emoticon on a smartphone display with <a href="https://theconversation.com/from-hal-9000-to-westworlds-dolores-the-pop-culture-robots-that-influenced-smart-voice-assistants-140341">a female voice</a>. However, she holds hidden depths. </p>
<p>Like the robots Will Smith battles in <a href="https://www.imdb.com/title/tt0343818/">I, Robot</a> (2004), PAL displays human feelings and autonomy. Rejected by the tech entrepreneur who created her in favour of a new line of domestic robots, PAL reprograms all robots to be killing machines, hunting down humans and sending them into space.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-hal-9000-to-westworlds-dolores-the-pop-culture-robots-that-influenced-smart-voice-assistants-140341">From HAL 9000 to Westworld’s Dolores: the pop culture robots that influenced smart voice assistants</a>
</strong>
</em>
</p>
<hr>
<p>One of the scariest moments is when the Mitchells and some robot allies find themselves in a shopping mall, attacked by any machine with a chip linked to the PAL network. Smart toasters, vacuum cleaners, fridges, vending machines, kettles, washing machines — and even an army of Furby toys — form a relentless robot battalion. </p>
<p>But the family’s human inventiveness and analogue survival skills help them live to fight another day.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Animated film still" src="https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=425&fit=crop&dpr=1 754w, https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=425&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/399644/original/file-20210510-21-12bg1hc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=425&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Attack of the killer Furby.</span>
<span class="attribution"><a class="source" href="https://www.imdb.com/title/tt7979580/?ref_=ttmi_tt">IMDB</a></span>
</figcaption>
</figure>
<h2>Ghosts in the machines</h2>
<p>Neither humans nor their devices are free of faults in this film. </p>
<p>The humans have become dependent on their smartphones, social media and other digital devices: aspects of our everyday lives have become digitised for no apparent benefit. </p>
<p>But unlike previous portrayals — such as in <a href="https://www.netflix.com/au/title/81254224">The Social Dilemma</a> (2020) — this dependence is not seen as permanent or pathological. Here, the boundaries between “good” and “bad” technologies are blurred, as is the line between “human” and “nonhuman”. We see both human and machine ingenuity, and human and machine hubris.</p>
<p>Human action initiates the robot Armageddon when an arrogant tech mogul, Dr. Mark Bowman (Eric Andre), treats PAL badly. Bowman displays little human feeling; PAL displays all the hurt of a jilted human lover.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/will-robots-make-good-friends-scientists-are-already-starting-to-find-out-154034">Will robots make good friends? Scientists are already starting to find out</a>
</strong>
</em>
</p>
<hr>
<p>When the first audiences watched Metropolis almost a century ago, robots were the frightening stuff of future technologies not yet invented. As computer technologies began to enter homes and workplaces, films increasingly showed robots as slaves (<a href="https://www.imdb.com/title/tt0073747/">The Stepford Wives</a>), companions (<a href="https://www.imdb.com/title/tt0076759/">Star Wars</a>) and domestic helpers (<a href="https://www.imdb.com/title/tt0182789/">Bicentennial Man</a>) — or in worlds unto themselves (<a href="https://www.imdb.com/title/tt0358082/">Robots</a>).</p>
<p>In this era of <a href="https://acola.org/hs5-internet-of-things-australia/">the Internet of Things</a>, Australians own <a href="https://thewest.com.au/technology/australians-now-have-216-million-smartphones-as-internet-usage-on-gadgets-skyrockets-c-2736152">21.6 million smartphones</a>, and <a href="https://www.statista.com/outlook/dmo/smart-home/australia">32.7% of households</a> have smart devices.</p>
<p>The Mitchells vs the Machines suggests there is a new give and take between humans and machines. Digital devices can be fun and useful, and are less of a threat to our relationships than we might fear.</p>
<p>Indeed, these technologies can be more vulnerable and much less resilient than even the most dysfunctional human family.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/robots-were-dreamt-up-100-years-ago-why-havent-our-fears-about-them-changed-since-153267">Robots were dreamt up 100 years ago – why haven’t our fears about them changed since?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/160442/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Deborah Lupton receives funding from the Australian Research Council.</span></em></p>
Fictional screen robots have long represented our fear of technology. A new animated family film combines this trepidation with many parents’ fear of losing offline connection with their kids.
Deborah Lupton, SHARP Professor, leader of the Vitalities Lab, Centre for Social Research in Health and Social Policy Centre, UNSW Sydney, and leader of the UNSW Node of the ARC Centre of Excellence for Automated Decison-Making and Society, UNSW Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/147210
2020-10-15T13:01:47Z
2020-10-15T13:01:47Z
The threat of ‘killer robots’ is real and closer than you might think
<figure><img src="https://images.theconversation.com/files/363681/original/file-20201015-15-19n2btg.jpg?ixlib=rb-1.1.0&rect=22%2C11%2C3811%2C2144&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/digital-composite-video-ariel-target-over-1184819233">Media Whalestock/Shutterstock</a></span></figcaption></figure><p>From <a href="https://theconversation.com/self-driving-cars-why-we-cant-expect-them-to-be-moral-108299">self-driving cars</a>, to <a href="https://theconversation.com/curious-kids-who-is-siri-114940">digital assistants</a>, <a href="https://theconversation.com/why-ai-cant-ever-reach-its-full-potential-without-a-physical-body-146870">artificial intelligence</a> (AI) is fast becoming an integral technology in our lives today. But this same technology that can help to make our day-to-day life easier is also being incorporated into weapons for use in combat situations.</p>
<p>Weaponised AI features heavily in the security strategies of the <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12713">US, China and Russia</a>. And while some existing weapons systems already include autonomous capabilities <a href="https://www.sipri.org/publications/2017/other-publications/mapping-development-autonomy-weapon-systems">based on AI</a>, developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention.</p>
<p>Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the risk to <a href="https://www.hsdl.org/?abstract&did=826737">military personnel</a> and increases the ability to hit targets with <a href="https://unog.ch/80256EDD006B8954/(httpAssets)/B2A09D0D6083CB7CC125841E0035529D/$file/CCW_GGE.1_2019_WP.5.pdf">greater precision</a>. But outsourcing use-of-force decisions to machines violates human dignity. And it’s also incompatible with <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972071">international law</a>, which requires human judgement in context.</p>
<p>Indeed, the role that humans should play in <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/5497DF9B01E5D9CFC125845E00308E44/$file/CCW_GGE.1_2019_CRP.1_Rev2.pdf">use of force</a> decisions has been an increased area of focus in many United Nations (UN) meetings. And at a recent UN meeting, states agreed that it’s unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines – “<a href="http://www.article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf">without any human control whatsoever</a>”. </p>
<p>But while this may sound like good news, there continues to be major differences in how states define “human control”. </p>
<h2>The problem</h2>
<p>A closer look at different governmental statements shows that many states, including key developers of weaponised AI such as the US and UK, favour what’s known as a <a href="https://documents.unoda.org/wp-content/uploads/2020/09/20200901-United-States.pdf">distributed perspective of human control</a>. </p>
<p>This is where human control is present across the entire life-cycle of the weapons – from development, to use and at various stages of military decision-making. But while this may sound sensible, it actually leaves a lot of room for human control to become more nebulous. </p>
<figure class="align-center ">
<img alt="Group of heavily armed military robots" src="https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/362676/original/file-20201009-15-19qcv8i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Algorithms are beginning to change the face of warfare.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/group-heavily-armed-military-robots-very-1253670499">Mykola Holyutyak/Shutterstock</a></span>
</figcaption>
</figure>
<p>Taken at face value, recognising human control as a process rather than a single decision is correct and important. And it <a href="https://unidir.org/sites/default/files/2020-03/UNIDIR_Iceberg_SinglePages_web.pdf">reflects operational reality</a>, in that there are multiple stages to how modern militaries plan attacks involving a human chain of command. But there are drawbacks to relying upon this understanding. </p>
<p>It can, for example, uphold the illusion of human control when in reality it has been relegated to situations where it does not matter as much. This risks making the overall quality of human control in warfare dubious, in that it is exerted everywhere generally and nowhere specifically.</p>
<p>This could allow states to focus more on early stages of research and development and less so on specific decisions around the use of force on the battlefield, such as distinguishing between civilians and combatants or assessing a proportional military response – which are crucial to comply with international law. </p>
<p>And while it may sound reassuring to have human control from the research and development stage, this also glosses over significant technological difficulties – namely, that current algorithms are not <a href="https://unidir.org/publication/black-box-unlocked">predictable and understandable</a> to human operators. So even if human operators supervise systems applying such algorithms when using force, they are not able to understand how these systems have calculated targets.</p>
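<p>The unpredictability is easy to demonstrate on a toy problem. In the sketch below (a deliberately artificial setup with no connection to any deployed military system), two networks trained on the same data score near-identically in aggregate testing yet disagree on individual inputs – so an operator watching a system’s track record cannot predict its next decision.</p>
<pre><code># Toy demonstration only: synthetic 2-D data, no real targeting system.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.35, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two identically configured networks, differing only in random seed.
nets = [MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=seed).fit(X_tr, y_tr) for seed in (1, 2)]

for i, net in enumerate(nets):
    print(f"net {i}: test accuracy {net.score(X_te, y_te):.3f}")

# Aggregate performance is nearly identical, yet individual verdicts differ.
disagreement = np.mean(nets[0].predict(X_te) != nets[1].predict(X_te))
print(f"share of test inputs where the nets disagree: {disagreement:.1%}")
</code></pre>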
<h2>Life and death with data</h2>
<p>Unlike a machine’s behaviour, human decisions to use force cannot be pre-programmed. Indeed, the brunt of international humanitarian law obligations applies to actual, specific battlefield decisions to use force, rather than to earlier stages of a weapons system’s lifecycle. This was highlighted by a member of the <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/EB97EA2C3DD0FA51C12583CB003AFED9/$file/Brazil+GGE+LAWS+2019+-+Item+5+d+-+Human+element.pdf">Brazilian delegation</a> at the recent UN meetings.</p>
<p>Adhering to international humanitarian law in the fast-changing context of warfare also requires constant human assessment. This cannot simply be done with an algorithm. This is especially the case in urban warfare, where civilians and combatants are in the same space. </p>
<p>Ultimately, to have machines that are able to make the decision to end people’s lives violates human dignity by reducing people to objects. As <a href="https://peterasaro.org/writing/Asaro%20Oxford%20AI%20Ethics%20AWS.pdf">Peter Asaro</a>, a philosopher of science and technology, argues: “Distinguishing a ‘target’ in a field of data is not recognising a human person as someone with rights.” Indeed, a machine cannot be programmed to appreciate the <a href="https://onlinelibrary.wiley.com/doi/epdf/10.1111/1758-5899.12691">value of human life</a>. </p>
<figure class="align-center ">
<img alt="Robot tank on road." src="https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&rect=65%2C17%2C2830%2C1914&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/362683/original/file-20201009-13-16gw735.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Russia’s ‘Platform-M’ combat robot which can be used both for patrolling and attacks.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/vladivostok-russia-july-25-2016-exhibition-610303184">Shutterstock/Goga Shutter</a></span>
</figcaption>
</figure>
<p>Many states have <a href="https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and">argued for new legal rules</a> to ensure human control over autonomous weapons systems. But a few others, including the US, hold that <a href="https://www.justsecurity.org/72610/an-enduring-impasse-on-autonomous-weapons/">existing international law is sufficient</a>. Yet the uncertainty surrounding what meaningful human control actually is shows that more clarity, in the form of new international law, is needed. </p>
<p>This must focus on the essential qualities that make human control meaningful, while retaining human judgement in the context of specific use-of-force decisions. Without it, there’s a risk of undercutting the value of new international law aimed at <a href="https://theconversation.com/ai-has-already-been-weaponised-and-it-shows-why-we-should-ban-killer-robots-102736">curbing weaponised AI</a>. </p>
<p>This is important because without specific regulations, current practices in military decision-making will continue to shape what’s considered <a href="https://theconversation.com/ai-has-already-been-weaponised-and-it-shows-why-we-should-ban-killer-robots-102736">“appropriate”</a> – without being critically discussed.</p>
<p class="fine-print"><em><span>Ingvild Bode receives funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 852123.</span></em></p>
Outsourcing use-of-force decisions to machines violates human dignity and is incompatible with international law.
Ingvild Bode, Associate Professor of International Relations, University of Southern Denmark
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/119397
2019-06-25T11:20:48Z
2019-06-25T11:20:48Z
Video Assistant Referee: in football, as in war, sometimes we need a human touch
<figure><img src="https://images.theconversation.com/files/281098/original/file-20190625-81780-xk7zhc.jpg?ixlib=rb-1.1.0&rect=0%2C5%2C3537%2C2349&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">German referee Felix Brych looks at a replay of the video assistant referee (VAR) during the UEFA Nations League semi final soccer match between Portugal and Switzerland, June 2019</span> <span class="attribution"><span class="source">EPA-EFE/Fernando Veludo</span></span></figcaption></figure><p>Over the past year or so, the Video Assistant Referee system, known as VAR, has been gradually rolled out to the world of football with the aim of improving the accuracy, and consistency, of refereeing decisions. However, its introduction has not been without controversy. The current <a href="https://www.bing.com/news/search?q=Women%27s+World+Cup+VAR+Controversy&qpvt=women%27s+world+cup+VAR+controversy&FORM=EWRE">Women’s World Cup has been beset by issues</a>, and there can be no escaping the public outcry at decisions that are deemed to be harsh, unfair or, in some cases, just plain wrong.</p>
<p>But while VAR is taking all the blame, the problem lies not so much with the system, or even the way it’s applied, but rather with the rules of the game themselves – and the way those rules are applied by human decision-makers at the other end of the line. </p>
<p>As many fans have come to realise, most decisions in football are not cut-and-dried. They are subjective decisions made in real time by a human referee.</p>
<p>By adding cameras, slow-motion replays and a strict reading of the “letter of the law”, VAR is exposing the problems with the rules themselves, and the way no rule can ever account for every eventuality on the field of play. In improving the “accuracy” and “consistency” of decision-making, VAR is turning what was once a fluid, analogue-style decision into a strictly digital, robotic question of yes-or-no, in-or-out.</p>
<p>Decisions in football simply aren’t clear-cut. Take handball, for example. According to the <a href="http://www.thefa.com/football-rules-governance/lawsandrules/laws/football-11-11/law-12---fouls-and-misconduct">laws of the game</a>, a penalty should be awarded for a <em>deliberate</em> handball in the penalty area. And yet what the VAR controversy has shown us is that it is very hard to determine what we mean by a deliberate action. If a player is competing for the ball, they are, by definition, deliberately trying to win it back. So any handball (accidental or otherwise) is deliberate in the broadest sense – even if the intent is not to commit a foul.</p>
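<p>To see why this is so hard to automate, consider what codifying the rule would actually involve. The sketch below is a hypothetical illustration in Python – not how any real VAR system works – in which “deliberate” has to be reduced to measurable proxies; the arm-angle and reaction-time thresholds are invented for the example.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class HandballIncident:
    arm_angle_deg: float    # how far the arm was from the body
    reaction_time_s: float  # time the player had to react

def is_deliberate(incident: HandballIncident) -> bool:
    """Naive proxy for 'deliberate': arbitrary thresholds stand in for intent."""
    UNNATURAL_ARM_ANGLE = 45.0  # degrees -- an invented cut-off
    MIN_REACTION_TIME = 0.3     # seconds -- another invented cut-off
    return (incident.arm_angle_deg > UNNATURAL_ARM_ANGLE
            and incident.reaction_time_s > MIN_REACTION_TIME)

# A deflection at 46 degrees with 0.31s to react is "deliberate"; the same
# incident at 44 degrees or 0.29s is not. The in-between cases a referee
# weighs up simply do not exist in the code.
print(is_deliberate(HandballIncident(arm_angle_deg=46.0, reaction_time_s=0.31)))  # True
</code></pre>
<p>Every number in such a sketch is a judgement frozen into a threshold – precisely the nuance that a binary yes-or-no system strips away.</p>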
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DOQXSSPc-Zg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>This grey area in the rules has been exacerbated by the introduction of VAR, which has served to draw attention to “errors” on the field of play, and has stripped nuance from the decision-making process. Either a handball has been committed, or it has not. With VAR, there can be no in-between.</p>
<h2>No room for error</h2>
<p>The VAR controversy reveals an issue that strikes to the very heart of debates around robots and robot ethics. By coding an ethical “decision” into a machine, there can be no nuance, and no scope to err. Take <a href="https://theconversation.com/killer-robots-already-exist-and-theyve-been-here-a-very-long-time-113941">killer robots for example</a>.</p>
<p>Were we to send out a host of such robots into battle, we would need to program them with a series of protocols to define who to kill and who to avoid. While wars may once have been fought between two (or more) clearly marked sides, it is no longer so easy to distinguish between friend and foe. Our enemies no longer wear uniforms marking them as targets, and more often than not, they move among us. It therefore becomes difficult, if not impossible, to program a “killer robot” to decide between friend and foe based on uniform alone.</p>
<p>These issues become even thornier when we think about civilians and human rights. At what point does a civilian become a combatant? At what point do they become a legitimate target?</p>
<p>While many would (rightly) point out that killer robots can make better ethical decisions in some cases – because they do not succumb to stress, fatigue and disorientation – they still have to make a decision based on a predetermined set of codes.</p>
<p>The issue here is not that the robots don’t carry out their instructions, but rather, as <a href="https://blogs.lse.ac.uk/lsereviewofbooks/2016/05/13/the-long-read-a-theory-of-the-drone-by-gregoire-chamayou/">drone theorist Grégoire Chamayou suggests</a>, that they will never disobey, and will continue to carry out their orders exactly as programmed. This exposes a paradox at the heart of human ethics, for any decision must, by definition, be a sacrifice of all other decisions at all other times. By making a decision one way or the other, and codifying it in computer code, we are making an ethical decision for all time – one the killer robot will follow regardless.</p>
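<p>In code, the point is stark. The fragment below is a hypothetical sketch, not any real targeting system: once an engagement rule is written down, it is applied identically, forever, with no capacity to hesitate or disobey.</p>
<pre><code>def engage(target: dict) -> bool:
    """A fixed rule of engagement: decided once at design time, applied forever."""
    return target.get("carrying_weapon", False) and not target.get("is_ally", False)

# A combatant, a farmer with a hunting rifle and a child with a toy gun all
# reduce to the same two booleans -- the ethical decision was made for every
# future case at once, and the rule will never refuse to fire.
print(engage({"carrying_weapon": True, "is_ally": False}))  # True, whoever it is
</code></pre>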
<h2>To shoot or not to shoot?</h2>
<p>This brings us back to football and the question of VAR. As the VAR controversy shows us, there is no such thing as a simple rule. Rules (and indeed laws) are of course designed to be followed to the letter, but as soon as we apply them in a strict, robotic fashion, we expose the perversity of applying a universal general rule to an infinity of individual cases.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/killer-robots-already-exist-and-theyve-been-here-a-very-long-time-113941">Killer robots already exist, and they’ve been here a very long time</a>
</strong>
</em>
</p>
<hr>
<p>Thus, even something as “simple” as handball in the penalty area is not quite so simple and clear-cut as it first appears. These decisions become even more problematic in a military setting, because targeting criteria are often fluid – it is rarely as simple as telling a machine to target “anyone with a gun”. As the VAR controversy in football shows us, sometimes we need a human touch.</p>
<p class="fine-print"><em><span>Mike Ryder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Many fans think the VAR is ruining the Women’s World Cup.
Mike Ryder, Associate Lecturer, Literature and Philosophy, Lancaster University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/113941
2019-03-27T12:28:25Z
2019-03-27T12:28:25Z
Killer robots already exist, and they’ve been here a very long time
<figure><img src="https://images.theconversation.com/files/265890/original/file-20190326-36270-hurjwk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5600%2C3150&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-render-very-detailed-robot-army-1197070471">Mykola Holyutyak/Shutterstock</a></span></figcaption></figure><p>Humans will always make the final decision on whether armed robots can shoot, <a href="https://www.defenseone.com/technology/2019/03/us-military-changing-killing-machine-robo-tank-program-after-controversy/155256/">according to a statement</a> by the US Department of Defense. The clarification comes amid fears about a new advanced targeting system, known as <a href="https://www.fbo.gov/index.php?s=opportunity&mode=form&id=29a4aed941e7e87b7af89c46b165a091&tab=core&_cview=0">ATLAS</a>, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about <a href="https://www.bbc.co.uk/news/technology-47524768">so-called “killer robots”</a>, the concept is nothing new – <a href="https://www.wired.com/2007/08/httpwwwnational/">machine-gun-wielding “SWORDS” robots</a> were deployed in Iraq as early as 2007.</p>
<p>Our relationship with military robots goes back even further than that. This is because when people say “robot”, they can mean any technology with some form of “autonomous” element that allows it to perform a task without the need for direct human intervention.</p>
<p>These technologies have existed for a very long time. During World War II, the <a href="https://en.wikipedia.org/wiki/Proximity_fuze">proximity fuse</a> was developed to explode artillery shells at a predetermined distance from their target. This made the shells far more effective than they would otherwise have been by augmenting human decision making and, in some cases, taking the human out of the loop completely.</p>
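<p>The fuze’s “decision” can be sketched in a few lines. The code below is a hypothetical toy, not the electronics of any actual fuze: the shell detonates when its sensed distance to the target drops below a preset threshold, with no human in the loop.</p>
<pre><code>TRIGGER_DISTANCE_M = 20.0  # an invented threshold, for illustration only

def fuze_should_detonate(sensed_distance_m: float) -> bool:
    """The entire 'decision' is a single comparison, made without a human."""
    return sensed_distance_m <= TRIGGER_DISTANCE_M

# Distances sensed as the shell closes on an aircraft:
for distance in [120.0, 60.0, 25.0, 18.0]:
    if fuze_should_detonate(distance):
        print("detonate at", distance, "m")
        break
</code></pre>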
<p>So the question is not so much whether we should use autonomous weapon systems in battle – we already use them, and they take many forms. Rather, we should focus on how we use them, why we use them, and what form – if any – human intervention should take.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=371&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=371&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=371&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=466&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=466&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266071/original/file-20190327-139364-1n4guap.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=466&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Autonomous targeting systems originated with innovations in anti-aircraft weaponry during World War II.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/antiaircraft-cannon-military-silhouettes-fighting-scene-1308991861">Zef Art/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The birth of cybernetics</h2>
<p>My research explores the philosophy of human-machine relations, with a particular focus on military ethics, and the way we distinguish between humans and machines. During World War II, mathematician Norbert Wiener laid the groundwork of <a href="https://www.pangaro.com/definition-cybernetics.html">cybernetics</a> – the study of the interface between humans, animals and machines – in his work on the control of anti-aircraft fire. By studying the deviations between an aircraft’s predicted motion, and its actual motion, Wiener and his colleague Julian Bigelow came up with the concept of the “feedback loop”, where deviations could be fed back into the system in order to correct further predictions.</p>
<p>Wiener’s theory therefore went far beyond mere augmentation, for cybernetic technology could be used to pre-empt human decisions – removing the fallible human from the loop, in order to make better, quicker decisions and make weapons systems more effective.</p>
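<p>The feedback loop itself is simple enough to sketch. The toy predictor below is an illustration only – not Wiener and Bigelow’s actual gun-director mathematics: it predicts the target’s next position, measures the deviation between predicted and actual motion, and feeds that error back to correct its estimate.</p>
<pre><code>def feedback_predictor(observations, lead=1.0, gain=0.5):
    """Predict each next position; correct the velocity estimate from the error."""
    velocity = 0.0
    predictions = []
    prev = observations[0]
    for pos in observations[1:]:
        predictions.append(pos + velocity * lead)  # where we expect the target next
        error = (pos - prev) - velocity            # deviation: actual vs predicted motion
        velocity += gain * error                   # feed the deviation back into the model
        prev = pos
    return predictions

# An aircraft accelerating along one axis: the loop homes in on its motion.
print(feedback_predictor([0.0, 1.0, 2.5, 4.5, 7.0, 10.0]))
</code></pre>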
<p>In the years since World War II, the computer has emerged to sit alongside cybernetic theory to form a central pillar of military thinking, from the laser-guided “smart bombs” of the Vietnam era to cruise missiles and Reaper drones.</p>
<p>It’s no longer enough merely to augment the human warrior, as it was in the early days. The next phase is to remove the human completely – “maximising” military outcomes while minimising the political cost associated with the loss of allied lives. This has led to the <a href="https://www.nytimes.com/roomfordebate/2016/01/12/reflecting-on-obamas-presidency/obamas-embrace-of-drone-strikes-will-be-a-lasting-legacy">widespread use of military drones</a> by the US and its allies. While these missions are highly controversial, in political terms they have proved far preferable to the public outcry caused by military deaths.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266084/original/file-20190327-139380-ovqdmv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A modern military drone.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/combat-drone-fly-blue-sky-above-1302944902">Alex LMX/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The human machine</h2>
<p>One of the most contentious issues relating to drone warfare is the role of the drone pilot or “operator”. Like all personnel, these operators are bound by their employers to “do a good job”. However, the terms of success are far from clear. As philosopher and cultural critic Laurie Calhoun observes:</p>
<blockquote>
<p>The business of UCAV [drone] operators is to kill.</p>
</blockquote>
<p>In this way, their task is not so much to make a human decision, but rather to do the job that they are employed to do. If the computer tells them to kill, is there really any reason why they shouldn’t?</p>
<p>A similar argument can be made with respect to the modern soldier. From GPS navigation to video uplinks, soldiers carry numerous devices that tie them into a vast network that monitors and controls them at every turn.</p>
<p>This leads to an ethical conundrum. If the purpose of the soldier is to follow orders to the letter – with cameras used to ensure compliance – then why do we bother with human soldiers at all? After all, machines are far more efficient than human beings and don’t suffer from fatigue and stress in the same way as a human does. If soldiers are expected to behave in a programmatic, robotic fashion anyway, then what’s the point in shedding unnecessary allied blood?</p>
<p>The answer here is that the human serves as an alibi or form of “ethical cover” for what is, in reality, an almost wholly mechanical, robotic act. Just as the drone operator’s job is to oversee the computer-controlled drone, so the human’s role in the Department of Defense’s new ATLAS system is merely to act as ethical cover in case things go wrong.</p>
<p>While Predator and Reaper drones may stand at the forefront of the public imagination about military autonomy and “killer robots”, these innovations are in themselves nothing new. They are merely the latest in a long line of developments that go back many decades.</p>
<p>While it may comfort some readers to imagine that machine autonomy will always be subordinate to human decision making, this really does miss the point. Autonomous systems have long been embedded in the military and we should prepare ourselves for the consequences.</p>
<p class="fine-print"><em><span>Mike Ryder does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Science fiction has made us vigilant of ‘killer robots’ in our midst, but they’re far closer than many of us realise.
Mike Ryder, Associate Lecturer in Philosophy, Lancaster University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/110078
2019-03-13T10:40:42Z
2019-03-13T10:40:42Z
Robots guarded Buddha’s relics in a legend of ancient India
<figure><img src="https://images.theconversation.com/files/263198/original/file-20190311-86707-1cbiw3w.jpg?ixlib=rb-1.1.0&rect=44%2C31%2C670%2C489&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Two small figures guard the table holding the Buddha's relics. Are they spearmen, or robots?</span> <span class="attribution"><a class="source" href="https://www.britishmuseum.org/research/collection_online/collection_object_details.aspx?assetId=35705001&objectId=223786&partId=1">British Museum</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span></figcaption></figure><p>As early as Homer, more than 2,500 years ago, Greek mythology explored the idea of automatons and self-moving devices. By the third century B.C., engineers in Hellenistic Alexandria, in Egypt, were <a href="https://www.britannica.com/biography/Ctesibius-of-Alexandria;%20https://www.britannica.com/biography/Heron-of-Alexandria">building real mechanical robots</a> and machines. And such science fictions and <a href="https://www.realmofhistory.com/2016/06/18/6-automaton-conceptions-history/">historical technologies</a> were not unique to Greco-Roman culture. </p>
<p>In my recent book “<a href="https://press.princeton.edu/titles/14162.html">Gods and Robots</a>,” I explain that many ancient societies imagined and constructed automatons. Chinese chronicles tell of emperors fooled by realistic androids and describe artificial servants crafted in the second century by the female inventor <a href="http://wolfberrystudio.blogspot.com/2010/11/zhuge-liang.html">Huang Yueying</a>. Techno-marvels, such as flying war chariots and animated beings, also appear in Hindu epics. One of the most intriguing stories from India tells how <a href="https://press.princeton.edu/titles/7882.html">robots once guarded Buddha’s relics</a>. As fanciful as it might sound to modern ears, this tale has a strong basis in links between ancient Greece and ancient India.</p>
<p>The story is set in the time of kings Ajatasatru and Asoka. Ajatasatru, who reigned from 492 to 460 B.C., was recognized for commissioning new military inventions, such as powerful catapults and a <a href="https://www.thehindu.com/thehindu/fr/2005/06/24/stories/2005062403600300.htm">mechanized war chariot</a> with whirling blades. When Buddha died, Ajatasatru was entrusted with defending his precious remains. The king hid them in an underground chamber near his capital, Pataliputta (now Patna) in northeastern India. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=524&fit=crop&dpr=1 754w, https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=524&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/263490/original/file-20190312-86699-1k2ymig.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=524&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A sculpture depicting the distribution of the Buddha’s relics.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:The_Distribution_of_the_Buddha%27s_Relics_LACMA_M.84.151.jpg">Los Angeles County Museum of Art/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Traditionally, statues of giant warriors stood on guard near treasures. But in the legend, Ajatasatru’s guards were extraordinary: They were robots. In India, automatons or mechanical beings that could move on their own were called “<a href="https://doi.org/10.1086/685573">bhuta vahana yanta</a>,” or “spirit movement machines” in Pali and Sanskrit. According to the story, it was foretold that Ajatasatru’s robots would remain on duty until a future king would distribute Buddha’s relics throughout the realm.</p>
<h2>Ancient robots and automatons</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=867&fit=crop&dpr=1 600w, https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=867&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=867&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1089&fit=crop&dpr=1 754w, https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1089&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/262949/original/file-20190308-155526-gq0du2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1089&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A statue of Visvakarman, the engineer of the universe.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Bishowkarma_Statue.jpg">Suraj Belbase/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p><a href="https://www.worldcat.org/title/pali-literature-including-the-canonical-literature-in-prakrit-and-sanskrit-of-all-the-hinayana-schools-of-buddhism/oclc/239747408">Hindu and Buddhist texts</a> describe the automaton warriors whirling like the wind, slashing intruders with swords, recalling Ajatasatru’s war chariots with spinning blades. In some versions the robots are driven by a water wheel or made by Visvakarman, the Hindu engineer god. But the most striking version came by a tangled route to the “<a href="https://www.worldcat.org/title/lokapannatti-et-les-idees-cosmologiques-du-boudhisme-ancien-2/oclc/490480618">Lokapannatti</a>” of Burma – Pali translations of older, lost Sanskrit texts, only known from Chinese translations, each drawing on earlier oral traditions. </p>
<p>In this tale, many “yantakara,” robot makers, lived in the Western land of the “Yavanas,” Greek-speakers, in “Roma-visaya,” the Indian name for the Greco-Roman culture of the Mediterranean world. The Yavanas’ secret technology of robots was closely guarded. The robots of Roma-visaya carried out trade and farming and captured and executed criminals. </p>
<p>Robot makers were forbidden to leave or reveal their secrets – if they did, <a href="https://www.academia.edu/12083581/_Alien_Intellect_and_the_Roboticization_of_the_Scientist._Camera_Obscura._Vol_14_1997_129-160">robotic assassins</a> pursued and killed them. Rumors of the fabulous robots reached India, inspiring a young artisan of Pataliputta, Ajatasatru’s capital, who wished to learn how to make automatons.</p>
<p>In the legend, the young man of Pataliputta finds himself reincarnated in the heart of Roma-visaya. He marries the daughter of the master robot maker and learns his craft. One day he steals plans for making robots, and hatches a plot to get them back to India.</p>
<p>Certain of being slain by killer robots before he could make the trip himself, he slits open his thigh, inserts the drawings under his skin and sews himself back up. Then he tells his son to make sure his body makes it back to Pataliputta, and starts the journey. He’s caught and killed, but his son recovers his body and brings it to Pataliputta.</p>
<p>Once back in India, <a href="http://www.scificatholic.com/2010/07/robots-of-myth-and-legend-saint-albert.html">the son retrieves the plans</a> from his father’s body, and follows their instructions to build the automated soldiers for King Ajatasatru to protect Buddha’s relics in the underground chamber. Well hidden and expertly guarded, the relics – and robots – fell into obscurity.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=523&fit=crop&dpr=1 600w, https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=523&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=523&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=658&fit=crop&dpr=1 754w, https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=658&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/263212/original/file-20190311-86693-uy72bm.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=658&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The sprawling Maurya Empire in about 250 B.C.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Maurya_Empire,_c.250_BCE.png">Avantiputra7/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Two centuries after Ajatasatru, Asoka ruled the powerful Mauryan Empire in Pataliputta, 273-232 B.C. Asoka constructed many <a href="https://www.britannica.com/biography/Ashoka">stupas to enshrine Buddha’s relics</a> across his vast kingdom. According to the legend, Asoka had heard the legend of the hidden relics and searched until he discovered the underground chamber guarded by the fierce android warriors. Violent battles raged between <a href="https://www.jstor.org/stable/25208320">Asoka and the robots</a>. </p>
<p>In one version, the god <a href="https://www.britannica.com/topic/Vishvakarman">Visvakarman</a> helped Asoka to defeat them by shooting arrows into the bolts that held the spinning constructions together; in another tale, the old engineer’s son explained how to disable and control the robots. At any rate, Asoka ended up commanding the army of automatons himself.</p>
<h2>Exchange between East and West</h2>
<p>Is this legend simply fantasy? Or could the tale have coalesced around early cultural exchanges between East and West? The story clearly connects the mechanical beings defending Buddha’s relics to automatons of Roma-visaya, the Greek-influenced West. How ancient is the tale? <a href="http://doi.org/10.1086/685573">Most scholars assume</a> it arose in medieval Islamic and European times.</p>
<p>But I think the story could be much older. The historical setting points to technological exchange between Mauryan and Hellenistic cultures. <a href="https://www.ancient.eu/article/208/cultural-links-between-india--the-greco-roman-worl/">Contact between India and Greece</a> began in the fifth century B.C., a time when Ajatasatru’s engineers created novel war machines. Greco-Buddhist cultural exchange intensified after Alexander the Great’s <a href="https://www.britannica.com/biography/Alexander-the-Great">campaigns in northern India</a>. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=630&fit=crop&dpr=1 600w, https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=630&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=630&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=792&fit=crop&dpr=1 754w, https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=792&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/262950/original/file-20190308-155499-10edlbf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=792&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Inscriptions in Greek and Aramaic on a monument originally erected by King Asoka at Kandahar, in what is today Afghanistan.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:AsokaKandahar.jpg">World Imaging/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>In 300 B.C., two Greek ambassadors, Megasthenes and Deimachus, resided in Pataliputta, which <a href="https://www.worldcat.org/title/chandragupta-maurya-and-his-times/oclc/426322281">boasted Greek-influenced art and architecture</a> and was the home of the legendary artisan who obtained plans for robots in Roma-visaya. Grand pillars erected by Asoka are <a href="https://madrascourier.com/insight/how-persian-greek-art-influenced-mauryan-architecture/">inscribed in ancient Greek</a> and name Hellenistic kings, demonstrating Asoka’s relationship with the West. Historians know that Asoka corresponded with Hellenistic rulers, <a href="https://www.cs.colostate.edu/%7Emalaiya/ashoka.html">including Ptolemy II Philadelphus</a> in Alexandria, whose <a href="https://sourcebooks.fordham.edu/ancient/285ptolemyII.asp">spectacular procession in 279 B.C.</a> famously displayed complex animated statues and automated devices.</p>
<p>Historians report that Asoka sent envoys to Alexandria, and <a href="https://www.worldcat.org/title/chandragupta-maurya-and-his-times/oclc/426322281">Ptolemy II sent ambassadors to Asoka</a> in Pataliputta. It was customary for diplomats to present splendid gifts to show off cultural achievements. Did they bring plans or miniature models of automatons and other mechanical devices?</p>
<p>I cannot hope to pinpoint the original date of the legend, but it is plausible that the idea of robots guarding Buddha’s relics melds both real and imagined engineering feats from the time of Ajatasatru and Asoka. This striking legend is proof that the concepts of building automatons were widespread in antiquity and reveals the universal and timeless link between imagination and science.</p>
<p>
<section class="inline-content">
<img src="https://images.theconversation.com/files/263202/original/file-20190311-86690-1as1aac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=128&h=128&fit=crop&dpr=1">
<div>
<header>Adrienne Mayor is the author of:</header>
<p><a href="https://press.princeton.edu/titles/14162.html">Gods and Robots: Myths, Machines, and Ancient Dreams of Technology</a></p>
<footer>Princeton University Press provides funding as a member of The Conversation US.</footer>
</div>
</section>
</p>
<p class="fine-print"><em><span>Princeton University Press provides funding as a member of The Conversation US.</span></em></p>
Stories passed down from the ancient world tell of self-powered machines able to move on their own – robots – playing key roles in historic moments.
Adrienne Mayor, Research Scholar, Classics and History and Philosophy of Science, Stanford University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/111584
2019-02-22T11:40:49Z
2019-02-22T11:40:49Z
Robots star in ads, but mislead viewers about technology
<figure><img src="https://images.theconversation.com/files/259323/original/file-20190215-56243-1mhik5c.jpg?ixlib=rb-1.1.0&rect=55%2C1%2C1122%2C716&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Robots can't really eat hot dogs.</span> <span class="attribution"><a class="source" href="https://www.youtube.com/watch?v=_rnrEQBieIQ">SimpliSafe/YouTube.com</a></span></figcaption></figure><p>Nowhere is the advance of technology more evident than in the rise of robots and artificial intelligence. From smart devices to self-checkout lanes to Netflix recommendations, robots (the hardware) and AI (the software) are everywhere inside the technology of modern society. They’re increasingly common in ads, too: During the 2019 Super Bowl alone, seven ads aired featuring either robots or AI.</p>
<p>Since I began <a href="http://www.joellerenstrom.com/publications/">studying human-robot interactions</a> almost a decade ago, I’ve observed that in most ads, robots typically fall into one of three general categories: scary, sad or stupid. All three perpetuate common misconceptions about technologies that are already beginning to play a pivotal role in people’s lives.</p>
<h2>The fear factor</h2>
<p>“Scary robot” ads are inevitable, given the <a href="https://www.wired.com/insights/2014/08/are-killer-robots-on-the-rise/">popularity of the sinister robot trope</a>. Advertisers, like Hollywood, embrace scary robot narratives because they’re more dramatic than ones in which robots and humans get along.</p>
<p>“<a href="https://www.youtube.com/watch?v=_rnrEQBieIQ">Fear is Everywhere</a>,” a paranoia-inducing 2019 commercial, advertises SimpliSafe home security systems, which use some of the same monitoring technology the ad demonizes. Rather than reminding viewers of their concerns about burglars or basement flooding, the ad highlights robots and AI as the omnipresent danger. A woman in an electronics store asks her friend <a href="https://theconversation.com/do-i-want-an-always-on-digital-assistant-listening-in-all-the-time-92571">if he’s listening</a>, and a creepy computer voice issues forth from a speaker: “Always, Denise.”</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_rnrEQBieIQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">SimpliSafe’s ‘Fear is Everywhere’ ad.</span></figcaption>
</figure>
<p>That same ad also highlights a second major type of fear – that robots will replace humans. A man watching a sporting event tells his friends, “in five years, robots will be able to do your job, and your job and your job,” while a robot sitting in the stands listens menacingly, as if affirming the assertion. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/j4IFNKYmLa8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Halo Top suggests humans’ only need is ice cream.</span></figcaption>
</figure>
<p>Then, of course, there’s the third trope, of the evil robot intent on harming people. A <a href="https://www.youtube.com/watch?v=j4IFNKYmLa8">2017 Halo Top ice cream</a> ad, for example, functions as a 90-second horror movie, in which a robot force-feeds a woman ice cream, and then casually mentions that everyone she knows is dead.</p>
<p>There are real <a href="https://theconversation.com/what-the-industrial-revolution-really-tells-us-about-the-future-of-automation-and-work-82051">threats to humans from robots and AI</a>. Automation may <a href="https://futurism.com/new-chart-proves-automation-serious-threat">eliminate millions of jobs</a> – and it <a href="https://www.forbes.com/sites/forbestechcouncil/2018/05/01/ai-doesnt-eliminate-jobs-it-creates-them/">might create many others</a> that don’t yet exist. Most likely, both will happen, as has happened throughout history: Elevator operators disappeared and social-media manager positions were created. The threat revolves around who will and who won’t be able to adjust or receive training to get the new jobs.</p>
<p>But the world is a long way off from robots that embody the “<a href="https://slate.com/technology/2013/07/pacific-rim-s-robots-go-beyond-the-frankenstein-complex.html">Frankenstein Complex</a>”, Isaac Asimov’s phrase for the human fear that <a href="https://www.e-reading.club/chapter.php/81822/47/Azimov_-_Robot_Visions.html">poorly designed mechanical creations</a> might turn against humanity. <a href="https://slate.com/technology/2015/04/ex-machina-can-robots-artificial-intelligence-have-emotions.html">Robots have no intentions</a> – only instructions. They can act as though they have feelings, but experience no actual emotion. No one knows whether robot emotion or sentience is even possible. </p>
<p>Ads that instill fear of technology in humans can present an unrealistic and unhelpful mindset for <a href="https://theconversation.com/5-ways-to-help-robots-work-together-with-people-101419">adapting to the increasing presence</a> of this technology in our lives – whether in <a href="https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data">criminal justice</a>, <a href="https://www.thedailybeast.com/robot-nurses-will-make-shortages-obsolete">health care</a> or <a href="https://theconversation.com/why-big-data-analysis-of-police-activity-is-inherently-biased-72640">other areas</a>. Fear can also distract people from properly understanding and planning for ways in which <a href="https://www.weforum.org/agenda/2018/06/the-3-skill-sets-workers-need-to-develop-between-now-and-2030/">humans can continue to offer meaningful skills</a> and insights beyond the abilities of any machine.</p>
<h2>Doom and gloom</h2>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tDakI68u2xE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Pringles are for everyone – sort of.</span></figcaption>
</figure>
<p>“Sad robot” ads combat people’s fears about robots while simultaneously eliciting sympathy for them. In a 2019 <a href="https://www.youtube.com/watch?v=tDakI68u2xE">Pringles ad</a>, a smart device bemoans its lack of hands to stack chips or mouth to eat them. The robot’s physical limitations reassure viewers of human superiority, and yet the robot is advanced enough to have genuine feelings of sadness.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/JIRX3yWhgZI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Could a child do your taxes?</span></figcaption>
</figure>
<p><a href="https://www.youtube.com/watch?v=JIRX3yWhgZI">Turbo Tax’s RoboChild</a> perpetuated the myth of robot intelligence in two appearances during the 2019 Super Bowl. RoboChild, which looks like young Haley Joel Osment’s face stuck on a small robot body, wants to be an accountant, but encounters constant reminders that it’s in a human world. A person tells RoboChild it isn’t emotionally complex enough for the job, correctly distinguishing between the human and robot abilities to feel emotion – while sparking viewers’ sympathy for the robot. </p>
<p>However, emotion isn’t necessary to fulfill most accounting functions: Artificial intelligence already performs <a href="https://www.forbes.com/sites/bernardmarr/2018/06/01/the-digital-transformation-of-accounting-and-finance-artificial-intelligence-robots-and-chatbots/">a number of financial tasks</a>, many of which require human interaction. </p>
<h2>Falling to pieces</h2>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Zkcutf36SkU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Robots may not make great insurance agents.</span></figcaption>
</figure>
<p>The third category of advertising robots doesn’t evoke fear or sympathy, but rather ridicule. A <a href="https://www.youtube.com/watch?v=Zkcutf36SkU">2018 State Farm ad</a>, for instance, pokes fun at a rival agency that has begun using cheap robot agents instead of human ones. The employee robot is a mess, spurting both hydraulic fluid and gibberish. In “stupid robot” ads, robots have cognitive constraints, sometimes in addition to physical ones. </p>
<p>These ads are at least somewhat realistic, as robots and AI have fundamental limitations – even the system that can <a href="https://www.thedailybeast.com/dont-pass-go-ai-will-beat-you-at-pretty-much-everything">beat an international Go champion</a> isn’t much good <a href="https://www.wired.com/story/how-to-teach-artificial-intelligence-common-sense/">at anything else</a>. Even so, portraying robots as a collection of laughable, malfunctioning parts undermines the seriousness of their implications. Humans who are laughing at dumb machines may not think clearly or prepare actively for a future in which even limited robots and AI are key players.</p>
<iframe src="https://players.brightcove.net/377748811/BkeObTWBe_default/index.html?videoId=5996370407001" allowfullscreen="" frameborder="0" width="100%" height="400"></iframe>
<p>Amazon’s Super Bowl ad featuring <a href="https://adage.com/article/special-report-super-bowl/watch-alexa-failures-amazon-s-super-bowl-commercial/316435/">Alexa fails</a> initially seemed like a collection of “stupid robot” highlights. A collar that allows a dog to order an entire truckload of food reminds viewers of Alexas that interpreted TV news or casual conversations as <a href="https://qz.com/880541/amazons-amzn-alexa-accidentally-ordered-a-ton-of-dollhouses-across-san-diego/">directives to buy products</a>. </p>
<p>It rightly makes the point that no product is perfect – but it subtly demonstrates the power of Amazon’s technologies, which in the ad shut down an entire continental power grid by accident. The technology itself is portrayed as dysfunctional – and something over which we can all have a laugh. However, the failures illustrate that the flaws lie in the human work of conception, design or programming. Laughing at the machines can distract people from that deeper insight, or from considering <a href="https://theconversation.com/we-need-to-know-the-algorithms-the-government-uses-to-make-important-decisions-about-us-57869">who should be responsible</a> when automation-enabled disaster strikes.</p>
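<p>A hypothetical sketch makes the point concrete – this is not Amazon’s actual wake-word or intent pipeline, just a naive keyword matcher of the kind a hurried designer might write. The “flaw” executes its specification perfectly; the design simply never asks who was speaking, or why.</p>
<pre><code>def naive_intent(utterance: str):
    """Match purchase intent by keywords alone -- a human design choice."""
    words = utterance.lower().replace(",", "").split()
    if "alexa" in words and ("order" in words or "ordered" in words):
        return "PLACE_ORDER"  # no check for who spoke, or whether it was a request
    return None

# A genuine request and an overheard TV news report match the same pattern.
print(naive_intent("Alexa, order more dog food"))              # PLACE_ORDER
print(naive_intent("Alexa accidentally ordered a dollhouse"))  # PLACE_ORDER
</code></pre>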
<p>Commercials aren’t likely to encourage viewers to seek out legitimate information about new technologies. Their main job is to sell a product or service, not contribute to an informed society. But they need not perpetuate generalized and unrealistic fears. The more misdirection people absorb about robots and AI, the less capable they will be of understanding and managing the real implications of technological advances.</p>
<p class="fine-print"><em><span>Joelle Renstrom does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
In ads, robots typically are scary, sad or stupid. Real-life robots and artificial intelligence systems are none of those.
Joelle Renstrom, Lecturer of Rhetoric and freelance science writer, Boston University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/107615
2019-01-24T18:41:56Z
2019-01-24T18:41:56Z
To protect us from the risks of advanced artificial intelligence, we need to act now
<figure><img src="https://images.theconversation.com/files/249203/original/file-20181206-128208-g5doqn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What would Artificial General Intelligence make of the human world?</span> <span class="attribution"><span class="source">Shutterstock/Nathapol Kongseang</span></span></figcaption></figure><p>Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind’s <a href="https://deepmind.com/research/alphago/">AlphaGo</a>, Tesla’s <a href="https://www.tesla.com/en_AU/autopilot">self-driving vehicles</a>, and <a href="https://www.ibm.com/watson/index.html">IBM’s Watson</a>. </p>
<p>This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. We encounter this type on a <a href="https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/">daily basis</a>, and its use is growing rapidly.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/when-ai-meets-your-shopping-experience-it-knows-what-you-buy-and-what-you-ought-to-buy-101737">When AI meets your shopping experience it knows what you buy – and what you ought to buy</a>
</strong>
</em>
</p>
<hr>
<p>But while many impressive capabilities have been demonstrated, we’re also beginning to <a href="https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-science">see problems</a>. The worst case involved a <a href="https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx">self-driving test car that hit a pedestrian</a> in March. The pedestrian died and the incident is still under <a href="https://www.ntsb.gov/investigations/Pages/HWY18FH010.aspx">investigation</a>. </p>
<h2>The next generation of AI</h2>
<p>With the next generation of AI the stakes will almost certainly be much higher. </p>
<p>Artificial General Intelligence (<a href="https://www.zdnet.com/article/what-is-artificial-general-intelligence/">AGI</a>) will have advanced computational powers and human level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for. </p>
<p>Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (<a href="http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials">ASI</a>).</p>
<p>While fully functioning AGI systems do not yet exist, it has been estimated that they will be with us anywhere between <a href="https://arxiv.org/abs/1805.01109">2029 and the end of the century</a>. </p>
<p>What appears almost certain is that they will arrive <a href="https://openai.com/">eventually</a>. When they do, there is a great and natural concern that we won’t be able to control them.</p>
<h2>The risks associated with AGI</h2>
<p>There is no doubt that AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.</p>
<p>But a failure to implement appropriate controls could lead to catastrophic consequences. </p>
<p>Despite what we see in <a href="https://blog.adext.com/en/artificial-intelligence-ai-movies">Hollywood movies</a>, existential threats are not likely to involve killer robots. The problem will not be one of malevolence, but rather one of intelligence, writes MIT professor Max Tegmark in his 2017 book <a href="https://www.goodreads.com/book/show/34272565-life-3-0">Life 3.0: Being Human in the Age of Artificial Intelligence</a>.</p>
<p>It is here that the science of human-machine systems – known as <a href="https://www.iea.cc/whats/index.html">Human Factors and Ergonomics</a> – will come to the fore. Risks will emerge from the fact that super-intelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even <a href="https://futureoflife.org/background/aimyths/">develop goals of their own</a>.</p>
<p>Imagine these examples: </p>
<ul>
<li><p>an AGI system tasked with preventing HIV decides to eradicate the problem by killing everybody who carries the disease, or one tasked with curing cancer decides to kill everybody who has any genetic predisposition for it</p></li>
<li><p>an autonomous AGI military drone decides the only way to guarantee an enemy target is destroyed is to wipe out an entire community</p></li>
<li><p>an environmentally protective AGI decides the only way to slow or reverse climate change is to remove technologies and humans that induce it. </p></li>
</ul>
<p>These scenarios raise the spectre of disparate AGI systems battling each other, none of which take human concerns as their central mandate.</p>
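<p>The common pattern behind such scenarios – an optimiser satisfying the letter of its objective while violating its spirit – can be shown in a few lines. The toy example below is entirely hypothetical; the action names and numbers are invented for illustration.</p>
<pre><code># Goal as specified: minimise reported disease cases.
# Goal as intended: reduce disease without harming anyone.
actions = {
    "fund_treatment":   {"cases_reported": 40,  "people_harmed": 0},
    "do_nothing":       {"cases_reported": 100, "people_harmed": 0},
    "destroy_registry": {"cases_reported": 0,   "people_harmed": 100},
}

# The objective function sees only the proxy metric, never the harm.
best = min(actions, key=lambda a: actions[a]["cases_reported"])
print(best)  # destroy_registry -- optimal under the proxy, catastrophic in fact
</code></pre>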
<p>Various dystopian futures have been advanced, including those in which humans eventually become obsolete, with the subsequent <a href="https://nickbostrom.com/existential/risks.html">extinction of the human race</a>.</p>
<p>Others have put forward less extreme but still significant disruptions, including malicious use of AGI for <a href="https://arxiv.org/abs/1802.07228">terrorist and cyber-attacks</a>, the <a href="https://www.nbcnews.com/think/opinion/will-robots-take-your-job-humans-ignore-coming-ai-revolution-ncna845366">removal of the need for human work</a>, and <a href="https://www.abc.net.au/news/2018-09-18/china-social-credit-a-model-citizen-in-a-digital-dictatorship/10200278">mass surveillance</a>, to name only a few.</p>
<p>So there is a need for human-centred investigations into the safest ways to design and manage AGI to minimise risks and maximise benefits.</p>
<h2>How to control AGI</h2>
<p>Controlling AGI is not as straightforward as simply applying the same kinds of controls that tend to keep humans in check. </p>
<p>Many controls on human behaviour rely on our consciousness, our emotions, and the application of our moral values. <a href="https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742">AGIs won’t need any of these attributes to cause us harm</a>. Current forms of control are not enough. </p>
<p>Arguably, there are three sets of controls that require development and testing immediately:</p>
<ol>
<li><p>the controls required to ensure AGI system designers and developers create safe AGI systems</p></li>
<li><p>the controls that need to be built into the AGIs themselves, such as “common sense”, morals, operating procedures, decision-rules, and so on</p></li>
<li><p>the controls that need to be added to the broader systems in which AGI will operate, such as regulation, codes of practice, standard operating procedures, monitoring systems, and infrastructure. </p></li>
</ol>
<p>Human Factors and Ergonomics offers methods that can be used to identify, design and test such controls well before AGI systems arrive.</p>
<p>For example, it’s possible to model the controls that exist in a particular system, model the likely behaviour of AGI systems within that control structure, and identify safety risks.</p>
<p>This will allow us to identify where new controls are required, design them, and then remodel to see if the risks are removed as a result. </p>
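<p>As a toy illustration, the sketch below (in Python; the controllers, controls and data are entirely hypothetical, not a prescribed method) represents a control structure as a list of control actions and flags any control that lacks a feedback channel, one simple class of risk this kind of modelling can surface.</p>
<pre><code>
# A minimal sketch of control-structure modelling, loosely in the spirit
# of systems-ergonomics methods. All names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Control:
    controller: str   # who or what enforces the control
    target: str       # the process or actor being controlled
    action: str       # the control action itself
    feedback: bool    # does the controller get feedback on the target?

# A toy control structure for a hypothetical AGI deployment.
controls = [
    Control("regulator", "developer", "certification requirements", True),
    Control("developer", "AGI system", "built-in decision rules", False),
    Control("operator", "AGI system", "monitoring and shutdown", True),
]

# A control without feedback is a candidate safety risk: the controller
# cannot tell whether its control action is actually working.
for c in controls:
    if not c.feedback:
        print(f"Risk: '{c.action}' ({c.controller} -> {c.target}) "
              "has no feedback channel")
</code></pre>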
<p>In addition, our models of cognition and decision making can be used to ensure AGIs behave appropriately and have humanistic values.</p>
<h2>Act now, not later</h2>
<p>This kind of research is <a href="https://futureoflife.org/first-ai-grant-recipients/">in progress</a>, but there is not nearly enough of it, and too few disciplines are involved.</p>
<p>Even the high-profile tech entrepreneur Elon Musk has warned of the “<a href="https://youtu.be/B-Osn1gMNtw?t=210">existential crisis</a>” humanity faces from advanced AI and has spoken about the <a href="https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo">need to regulate AI before it’s too late</a>.</p>
<p>The next decade or so represents a critical period. There is an opportunity to create safe and efficient AGI systems that can have far-reaching benefits to society and humanity. </p>
<p>At the same time, a business-as-usual approach in which we play catch-up with rapid technological advances could contribute to the extinction of the human race. The ball is in our court, but it won’t be for much longer.</p>
<p class="fine-print"><em><span>Paul Salmon receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Peter Hancock and Tony Carden do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
We’re on the road to developing artificial intelligence systems that will be able to do tasks beyond those they were designed for. But will we be able to control them?
Paul Salmon, Professor of Human Factors, University of the Sunshine Coast
Peter Hancock, Professor of Psychology, Civil and Environmental Engineering, and Industrial Engineering and Management Systems, University of Central Florida
Tony Carden, Researcher, University of the Sunshine Coast
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/108154
2018-12-05T13:08:16Z
2018-12-05T13:08:16Z
The Montréal Declaration: Why we must develop AI responsibly
<figure><img src="https://images.theconversation.com/files/248941/original/file-20181205-186073-aqlflq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">As AI is deployed in society, there is an impact that can be positive or negative. The future is in our hands</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>I have been doing research on intelligence for 30 years. Like most of my colleagues, I did not get involved in the field with the aim of producing technological objects, but because I have an interest in the the abstract nature of the notion of intelligence. I wanted to understand intelligence. That’s what science is: Understanding.</p>
<p>However, when a group of researchers ends up understanding something new, that knowledge can be exploited for beneficial or harmful purposes.</p>
<p>That’s where we are — at a turning point where the science of artificial intelligence is emerging from university laboratories. For the past five or six years, large companies such as <a href="https://ici.radio-canada.ca/nouvelle/1125202/laboratoire-intelligence-artificielle-%20facebook-montreal-anniversaire-expansion-montreal">Facebook </a> and <a href="https://ai.google">Google</a> have become so interested in the field that they are putting hundreds of millions of dollars on the table to buy AI firms and then develop this expertise internally.</p>
<p>The progression in AI has since been exponential. Businesses are very interested in using this knowledge to develop new markets and products and to improve their efficiency.</p>
<p>So, as AI spreads in society, there is an impact. It’s up to us to choose how things play out. The future is in our hands.</p>
<h2>Killer robots, job losses</h2>
<p>From the get-go, the issue that has concerned me is that of lethal autonomous weapons, also known as <a href="https://ici.radio-canada.ca/nouvelle/1136739/yoshua-bengio-robots-tueurs-intelligence-artificielle-militaire-collaboration-internationale-recherche-apprentissage-profond">killer robots</a>.</p>
<p>While there is a moral question because machines have no understanding of the human, psychological and moral context, there is also a security question because these weapons could destabilize the world order.</p>
<p>Another issue that quickly surfaced is that of job losses caused by automation. We asked the question: why? Whom are we trying to relieve, and of what? Is the trucker unhappy on the road? Should he be replaced by… nobody?</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&rect=0%2C10%2C3471%2C2538&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/248572/original/file-20181203-194935-844hv6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Machines have already replaced humans for many functions. How far will it go?</span>
<span class="attribution"><span class="source">Franck v/Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>We scientists seemingly can’t do much. Market forces determine which jobs will be eliminated or those where the workload will be lessened, according to the economic efficiency of the automated replacements. But we are also citizens who can participate in a unique way in the social and political debate on these issues precisely because of our expertise.</p>
<p>Computer scientists are concerned with the issue of jobs. That is not because they will suffer personally. In fact, the opposite is true. But they feel they have a responsibility and they don’t want their work to potentially put millions of people on the street.</p>
<h2>Revising the social safety net</h2>
<p>Strong support therefore exists among computer scientists — especially those in AI — for a revision of the social safety net to allow for a sort of guaranteed wage, or what I would call a form of guaranteed human dignity.</p>
<p>The objective of technological innovation is to reduce human misery, not increase it.</p>
<p>It is also not meant to increase discrimination and injustice. And yet, AI can contribute to both.</p>
<p>Discrimination is not so much due, as we sometimes hear, to the fact that AI was conceived by men (owing to the alarming lack of women in the technology sector). It is mostly due to AI learning from data that reflects people’s behaviour. And that behaviour is unfortunately biased.</p>
<p>In other words, a system that relies on data that comes from people’s behaviour will have the same biases and discrimination as the people in question. It will not be “politically correct.” It will not act according to the moral notions of society, but rather according to common denominators.</p>
<p>Society is discriminatory and these systems, if we’re not careful, could perpetuate or increase that discrimination.</p>
<p>There could also be what is called a feedback loop. For example, police forces use this kind of system to identify neighbourhoods or areas that are more at-risk. They will send in more officers… who will report more crimes. So the statistics will strengthen the biases of the system.</p>
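<p>To see how such a loop feeds on itself, consider a toy simulation: two districts with identical true crime rates, where one starts with more recorded crime. The numbers below are entirely made up; only the mechanism matters.</p>
<pre><code>
# Toy simulation of the predictive-policing feedback loop described above.
# Both districts have the same true crime rate, but district A starts
# with a biased history of recorded crime. All numbers are hypothetical.
true_rate = {"A": 0.1, "B": 0.1}    # identical underlying crime rates
recorded  = {"A": 70.0, "B": 30.0}  # biased historical statistics
officers  = 100

for year in range(20):
    total = sum(recorded.values())
    for district in recorded:
        # Patrols are allocated in proportion to recorded crime...
        patrols = officers * recorded[district] / total
        # ...and more patrols mean more crimes observed and recorded.
        recorded[district] += patrols * true_rate[district]

# District A still accounts for ~70% of recorded crime after 20 years,
# purely because of the initial bias: the statistics confirm the system.
print(recorded)
</code></pre>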
<p>The good news is that research is currently being done to develop algorithms that will minimize discrimination. Governments, however, will have to bring in rules to force businesses to use these techniques.</p>
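<p>One simple check from this line of research is "demographic parity": comparing how often a system makes positive predictions for different groups. The sketch below illustrates the idea on made-up data; mitigation techniques then adjust training or decision thresholds to shrink the gap.</p>
<pre><code>
# A minimal sketch of one fairness check, demographic parity:
# do two groups receive positive predictions at similar rates?
# The predictions and group labels below are made-up illustrative data.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = {}
for g in set(groups):
    preds = [p for p, grp in zip(predictions, groups) if grp == g]
    rates[g] = sum(preds) / len(preds)

gap = abs(rates["a"] - rates["b"])
print(rates, gap)  # a large gap suggests the groups are treated differently
</code></pre>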
<h2>Saving lives</h2>
<p>There is also good news on the horizon. The medical field will be one of those most affected by AI — and it’s not just a matter of saving money.</p>
<p>Doctors are human and therefore make mistakes. So as we develop systems trained on more data, fewer mistakes will occur. Such systems are more precise than the best doctors. Doctors are already using these tools so they don’t miss important elements such as cancerous cells that are difficult to detect in a medical image.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=432&fit=crop&dpr=1 600w, https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=432&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=432&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=543&fit=crop&dpr=1 754w, https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=543&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/247132/original/file-20181125-149308-11gudd6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=543&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The data provided by artificial intelligence will allow patients’ medical records to be interpreted much more effectively.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/qwtCeJ5cLYs">Stephen Dawson/Unsplash</a>, <a class="license" href="http://artlibre.org/licence/lal/en">FAL</a></span>
</figcaption>
</figure>
<p>There is also the development of new medications. AI can do a better job of analyzing the vast amount of data (more than what a human would have time to digest) that has been accumulated on drugs and other molecules. We’re not there yet, but the potential is there, as is more efficient analysis of a patient’s medical file.</p>
<p>We are headed toward tools that will allow doctors to make links that otherwise would have been very difficult to make and will enable physicians to suggest treatments that could save lives.</p>
<p>The chances of the medical system being completely transformed within 10 years are very high and, obviously, the importance of this progress for everyone is enormous.</p>
<p>I am not concerned about job losses in the medical sector. We will always need the competence and judgment of health professionals. However, we need to strengthen social norms (laws and regulations) to allow for the protection of privacy (patients’ data should not be used against them) as well as to aggregate that data to enable AI to be used to heal more people and in better ways.</p>
<h2>The solutions are political</h2>
<p>Because of all these issues and others to come, the <a href="https://www.montrealdeclaration-responsibleai.com/">Montréal Declaration for Responsible Development of Artificial Intelligence</a> is important. <a href="https://nouvelles.umontreal.ca/en/article/2018/12/04/developing-ai-in-a-responsible-way/">It was signed Dec. 4</a> at the Society for Arts and Technology in the presence of about 500 people. </p>
<p>It was forged on the basis of vast consensus. We consulted people on the internet and in bookstores and gathered opinion in all kinds of disciplines. Philosophers, sociologists, jurists and AI researchers took part in the process of creation, so all forms of expertise were included.</p>
<p>There were several versions of this declaration. The first draft was at a forum on the <a href="http://www.lecre.umontreal.ca/ai1ec_event/colloque-sur-lintelligence-artificielle/?instance_id=">socially responsible development of AI</a> organized by the Université de Montréal on Nov. 2, 2017.</p>
<p>That was the birthplace of the declaration.</p>
<p>Its goal is to establish a certain number of principles that would form the basis of the adoption of new rules and laws to ensure AI is developed in a socially responsible manner. Current laws are not always well adapted to these new situations.</p>
<p>And that’s where we get to politics.</p>
<h2>The abuse of technology</h2>
<p>Matters related to ethics or abuse of technology ultimately become political and therefore belong in the sphere of collective decisions.</p>
<p>How is society to be organized? That is political.</p>
<p>What is to be done with knowledge? That is political. </p>
<p>I sense a strong willingness on the part of provincial governments as well as the federal government to commit to socially responsible development.</p>
<p>Because Canada is a scientific leader in AI, it was one of the first countries to see all its potential and to develop a national plan. It also has the will to play the role of social leader.</p>
<p>Montréal has been at the forefront of this sense of awareness for the past two years. I also sense the same will in Europe, including France and Germany.</p>
<p>Generally speaking, scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate.</p>
<p>And in this debate, I have come to realize that society has given me a voice — that governments and the media were interested in what I had to say on these topics because of my role as a pioneer in the scientific development of AI.</p>
<p>So, for me, it is now more than a responsibility. It is my duty. I have no choice.</p>
<p class="fine-print"><em><span>Yoshua Bengio has received funding from Google, Facebook, Microsoft, DeepMind, Imagia, Samsung, IBM, Huawei, Panasonic, Nuance, NSERC, Canada Research Chairs, CFI, FRQNT.</span></em></p>
The Montréal Declaration calls for the responsible development of artificial intelligence. A world expert explains why scientists must choose how their expertise will benefit society.
Yoshua Bengio, Professeur titulaire, Département d'informatique et de recherche opérationnelle, Université de Montréal
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/102637
2018-09-12T02:51:16Z
2018-09-12T02:51:16Z
Why it’s so hard to reach an international agreement on killer robots
<figure><img src="https://images.theconversation.com/files/235713/original/file-20180911-123113-d66rcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The MK 15 Phalanx close-in weapons system, on the USS Reuben James guided-missile frigate, fires during an exercise. </span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/compacflt/7066133355/">Flickr/US Pacific Fleet</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>For several years, civil society groups have been <a href="https://www.stopkillerrobots.org/">calling for a ban</a> on what they call “killer robots”. Scores of technologists have <a href="https://futureoflife.org/open-letter-autonomous-weapons/">lent their voice</a> to the cause. Some two dozen governments now <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">support a ban</a> and several others would like to see some kind of international regulation. </p>
<p>Yet the latest talks on “lethal autonomous weapons systems” wrapped up last month with no agreement on a ban. The <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">Group of Governmental Experts</a> meeting, convened in Geneva under the auspices of the United Nations Convention on Certain Conventional Weapons, did not even clearly proceed towards one. The outcome was a decision to continue discussions next year. </p>
<p>Those supporting a ban are <a href="https://www.hrw.org/news/2018/09/05/support-grows-killer-robots-ban">not impressed</a>. But the reasons for the failure to reach agreement on the way forward are complex. </p>
<h2>What to ban?</h2>
<p>The immediate difficulty concerns articulating what technology is objectionable. The related, deeper question is about whether increased autonomy of weapons is always bad.</p>
<p>Many governments, including <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/2440CD1922B86091C12582720057898F/$file/2018_LAWS6a_Germany.pdf">Germany</a>, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/072ED40378F79CFBC125827200575723/$file/2018_LAWSGeneralExchange_Spain.pdf">Spain</a> and the <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/050CF806D90934F5C12582E5002EB800/$file/2018_GGE+LAWS_August_Working+Paper_UK.pdf">United Kingdom</a>, have said they do not have, and do not want, weapons wholly uncontrolled by humans. At the same time, militaries already own weapons that, to some degree, function without someone pulling the trigger.</p>
<p>Since the 1970s, navies have used so-called close-in weapon systems (CIWS). Once switched on, these weapons can automatically shoot down incoming rockets and missiles as the warship’s final line of defence. <a href="https://www.raytheon.com/capabilities/products/phalanx">Phalanx</a>, with its distinctively shaped radar dome, is probably the best-known weapon system of this kind.</p>
<p>Armies now deploy land-based variants of CIWS, generally known as C-RAM (short for counter-rocket, artillery and mortar), for the protection of military bases. </p>
<p>Other types of weapons also have autonomous functionality. For example, <a href="https://www.baesystems.com/en-us/product/155-bonus">sensor-fuzed weapons</a>, fired in the general direction of their targets, rely on sensors and preset targeting parameters to launch themselves at individual targets. </p>
<p>None of these weapons has stirred significant controversy. </p>
<h2>The acceptable vs the unacceptable</h2>
<p>What exactly is the dreaded “fully autonomous” weapon system that no-one has much appetite for? Attempts to answer this question over the past few years have not enjoyed success. </p>
<p>The supporters of a ban note – correctly – that the lack of a precise definition has not stopped arms control negotiations before. They point to the <a href="https://www.un.org/disarmament/ccm/">Convention on Cluster Munitions</a>, signed in 2008, as an example.</p>
<p>The notion of a cluster munition – a large bomb that disperses small unguided bomblets – was clear enough from the outset. Yet the precise properties of the banned munition were agreed upon later in the process. </p>
<p>Unfortunately, the comparison between cluster munitions and autonomous weapons does not quite work. Though cluster munitions were a loose category to start, it was clear they could be categorised by technical criteria. </p>
<p>In the end, the Convention on Cluster Munitions <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/E6D340011E720FC9C1257516005818B8/$file/Convention+on+Cluster+Munitions+E.pdf">draws a line</a> between permissible and prohibited munitions by reference to things such as the number, weight and self-destruction capability of submunitions. </p>
<p>With regard to any similar rules on autonomous weapon systems, it is not only unclear where the line should be drawn between what is and isn’t permissible, it is also unclear what criteria to use for drawing it.</p>
<h2>How much human control?</h2>
<p>One way out of this thicket of definitions is to shift the focus from the weapon itself to the way the human interacts with the weapon. Rather than debate what to ban, governments should agree on the necessary degree of control humans should exercise. <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/3BDD5F681113EECEC12582FE0038B22F/$file/2018_GGE+LAWS_August_Working+paper_Austria_Brazil_Chile.pdf">Austria, Brazil and Chile</a> have suggested starting treaty negotiations precisely along those lines.</p>
<p>This change of perspective may well prove to be helpful. But the key problem is thereby transformed rather than resolved. The question now becomes: what kind of human involvement is needed and when must it occur?</p>
<p>A strict idea of human control would entail a human making a conscious decision about each individual target in real time. This approach would cast a shadow on the existing weapon systems mentioned earlier. </p>
<p>A strict reading of human control might also require the operator to have the ability to abort a weapon until the moment it hits a target. This would raise questions about even the simplest of weapons – rocks, spears, bullets or gravity bombs – which leave human hands at some point. </p>
<p>An alternative understanding of human control would consider the weapon’s broader design, testing, acquisition and deployment processes. It would admit, for example, that a weapon preprogrammed by a human is in fact controlled by a human. But some would consider programming to be a poor and unpalatable substitute for a human acting at the critical time. </p>
<p>In short, the furious agreement about the need to maintain human involvement hides a deep disagreement about what that means. This is not a mere semantic dispute. It is an important and substantive disagreement that defies an easy resolution.</p>
<h2>The benefits of autonomy</h2>
<p>Some governments, such as <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf">the United States</a>, argue that autonomous functions in weapons can yield military and humanitarian benefits. </p>
<p>They suggest, for example, that reducing the manual control that a human has over a weapon might increase its accuracy. This, in turn, could help avoid unintended harm to civilians.</p>
<p>Others find even the notion of benefits in this context to be too much. During the last Group of Governmental Experts meeting, several Latin American governments, most prominently <a href="http://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2018/gge/reports/CCWR6.11.pdf">Costa Rica and Cuba</a>, opposed any reference to potential benefits. In their view, autonomy in weapon systems only poses risks and challenges, which need to be mitigated through further regulation.</p>
<p>This divide reveals an underlying uncertainty about the aims of international law in armed conflict. For some, desirable outcomes – surgical use of force, reduced collateral damage, and so on – prevail. For others, the instruments of warfare must (sometimes) be restricted no matter the outcomes.</p>
<h2>The next step</h2>
<p>Supporters of the ban <a href="https://www.nytimes.com/aponline/2018/09/03/world/europe/ap-eu-united-nations-killer-robots.html">suggest</a> that a handful of powerful states, particularly the US and Russia, are blocking further negotiations.</p>
<p>This does not seem entirely accurate. Disagreements about the most appropriate way forward are much broader and quite fundamental. </p>
<p>Addressing the challenges of autonomous weapons is therefore not just a matter of getting a few recalcitrant governments to fall in line. Much less is it about verbally abusing them into submission. </p>
<p>If there is to be further regulation, and if that regulation is to be effective, the different viewpoints must be taken seriously – even if one disagrees with them. A quick fix is unlikely and, in the long term, probably counterproductive.</p>
<p class="fine-print"><em><span>Rain Liivoja currently holds a Branco Weiss Fellowship, administered by ETH Zurich. He has served as an expert on the Estonian delegation to the Group of Governmental Experts on Lethal Autonomous Weapons Systems. This article reflects his personal views.</span></em></p>
We already have some autonomous weapons – so talk of any ban should focus on where we draw the line on what is acceptable, and what is not. Can we at least agree on that?
Rain Liivoja, Associate Professor, TC Beirne School of Law, The University of Queensland
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/102736
2018-09-06T13:19:17Z
2018-09-06T13:19:17Z
AI has already been weaponised – and it shows why we should ban ‘killer robots’
<figure><img src="https://images.theconversation.com/files/235215/original/file-20180906-190636-aogrro.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/unmanned-air-uav-spy-above-enemy-26952160?src=-ZOKXFCzFXCQZUjYk5R16g-1-16">Oleg Yarko/Shutterstock</a></span></figcaption></figure><p>A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.</p>
<p>I witnessed this growing gulf at a recent UN meeting of more than 70 countries <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">in Geneva</a>, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf">the US claimed</a> that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.</p>
<p>Yet it’s highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/autonomous-weapons-systems-and-changing-norms-in-international-relations/8E8CC29419AF2EF403EA02ACACFCF223">setting undesirable standards</a> for its role in the use of force.</p>
<p>A series of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">open letters</a> by prominent researchers speaking out against weaponising artificial intelligence have helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.</p>
<p>Most air defence systems <a href="https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf">already have</a> significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Humans still press the trigger, but for how long?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541?src=eQqZybPxaHhkvow-YSqfIA-1-1">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries’ militaries to drop bombs on targets. But we know from incidents <a href="https://www.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/The%20Civilian%20Impact%20of%20Drones.pdf">in Afghanistan and elsewhere</a> that drone images aren’t enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision making, often with <a href="http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/">harmful effects</a>. </p>
<p>As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices laid out by drones. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as “semi-autonomous” or so-called “legacy systems”. Again, this makes the issue of “killer robots” seem more futuristic than it really is. This also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.</p>
<p>Several key principles of international humanitarian law require deliberate human judgements that machines <a href="https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/">are incapable of</a>. For example, the legal definition of who is a civilian and who is a combatant isn’t written in a way that could be programmed into AI, and <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903">machines lack</a> the situational awareness and ability to infer things necessary to make this decision.</p>
<h2>Invisible decision making</h2>
<p>More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. Drones <a href="https://www.theguardian.com/science/the-lay-scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan">already rely heavily</a> on “black box” algorithms, which process intelligence data in ways that are very difficult to understand, to choose their proposed targets. This <a href="http://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">makes it harder</a> for the human operators who actually press the trigger to question target proposals.</p>
<p>As the UN continues to debate this issue, it’s worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically <a href="http://www.article36.org/wp-content/uploads/2016/04/A36-Disarm-Dev-Marginalisation.pdf">less likely</a> to attend international disarmament talks. So their willingness to speak out strongly against autonomous weapons is all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of those in favour of autonomous weapons) also reminds us that they are most at risk from this technology.</p>
<p>Given what we know about existing autonomous systems, we should be very concerned that “killer robots” will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.</p>
<p class="fine-print"><em><span>Ingvild Bode receives funding from the Joseph Rowntree Charitable Trust. </span></em></p>
The debate on autonomous weapons isn’t paying enough attention to the technology already in use.
Ingvild Bode, Senior Lecturer in International Relations, University of Kent
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/101427
2018-08-21T10:32:05Z
2018-08-21T10:32:05Z
Ban ‘killer robots’ to protect fundamental moral and legal principles
<figure><img src="https://images.theconversation.com/files/232107/original/file-20180815-2909-5xtnkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The U.S. military is already testing a Modular Advanced Armed Robotic System.</span> <span class="attribution"><a class="source" href="https://www.marforpac.marines.mil/Exercises/RIMPAC/RIMPAC-Photos/igphoto/2001572635/">Lance Cpl. Julien Rodarte, U.S. Marine Corps</a></span></figcaption></figure><p>When drafting a <a href="https://www.britannica.com/event/Hague-Conventions">treaty on the laws of war</a> at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language. </p>
<p>This standard, known as the <a href="https://www.icrc.org/eng/resources/documents/article/other/57jnhy.htm">Martens Clause</a>, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”</p>
<p>I was the lead author of a <a href="https://www.hrw.org/node/321376">new report</a> by <a href="https://www.hrw.org/">Human Rights Watch</a> and the <a href="http://hrp.law.harvard.edu/">Harvard Law School International Human Rights Clinic</a> that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">weapons</a>.</p>
<p>Representatives of more than 70 nations will gather from August 27 to 31 at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the <a href="https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument">Convention on Conventional Weapons</a>, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.</p>
<h2>Making rules for the unknowable</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=712&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=712&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=712&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=895&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=895&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232104/original/file-20180815-2918-y4vzrw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=895&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Russian diplomat Fyodor Fyodorovich Martens, for whom the Martens Clause is named.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Friedrich_Fromhold_Martens_(1845-1909).jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.</p>
<p>Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/03/KRC_Briefing_CCWApr2018.pdf">all working to develop</a> them. They argue that the technology would process information faster and keep soldiers off the battlefield.</p>
<p>The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.</p>
<h2>Principles of humanity</h2>
<p>The history of the Martens Clause shows that it is a fundamental principle of international humanitarian law. Originating in the <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=9FE084CDAC63D10FC12563CD00515C4D">1899 Hague Convention</a>, versions of it appear in all four <a href="https://www.icrc.org/eng/assets/files/publications/icrc-002-0173.pdf#page=83">Geneva Conventions</a> and <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6C86520D7EFAD527C12563CD0051D63C">Additional Protocol I</a>. It is cited in <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=056FD614A7D05D90C12563CD0051EC75">numerous</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=CB3CAB98FF67D28EC12574C60038D63C">disarmament</a> <a href="https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=6D8BF0E4ABD74D62C125825D004955B1">treaties</a>. In 1995, concerns under the Martens Clause motivated countries to adopt a <a href="https://ihl-databases.icrc.org/ihl/INTRO/570">preemptive ban on blinding lasers</a>. </p>
<p>The principles of humanity require humane treatment of others and respect for human life and dignity. Fully autonomous weapons could not meet these requirements because they would be unable to feel compassion, an emotion that inspires people to minimize suffering and death. The weapons would also lack the legal and ethical judgment necessary to ensure that they protect civilians in complex and unpredictable conflict situations.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/232108/original/file-20180815-2924-75wyif.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Under human supervision – for now.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Marine_Corps_Warfighting_Laboratory_MAGTAF_Integrated_Experiment_(MCWL)_160709-M-OB268-165.jpg">Pfc. Rhita Daniel, U.S. Marine Corps</a></span>
</figcaption>
</figure>
<p>In addition, as inanimate machines, these weapons could not truly understand the value of an individual life or the significance of its loss. Their algorithms would translate human lives into numerical values. By making lethal decisions based on such algorithms, they would reduce their human targets – whether civilians or soldiers – to objects, undermining their human dignity.</p>
<h2>Dictates of public conscience</h2>
<p>The growing opposition to fully autonomous weapons shows that they also conflict with the dictates of public conscience. Governments, experts and the general public have all objected, often on moral grounds, to the possibility of losing human control over the use of force.</p>
<p>To date, <a href="https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf">26 countries</a> have expressly supported a ban, including China. <a href="https://www.theguardian.com/commentisfree/2018/apr/11/killer-robot-weapons-autonomous-ai-warfare-un">Most countries</a> that have spoken at the U.N. meetings on conventional weapons have called for maintaining some form of meaningful human control over the use of force. Requiring such control is effectively the same as banning weapons that operate without a person who decides when to kill.</p>
<p>Thousands of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">scientists and artificial intelligence experts</a> have endorsed a prohibition and demanded action from the United Nations. In July 2018, they issued a <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">pledge not to assist</a> with the development or use of fully autonomous weapons. <a href="https://www.clearpathrobotics.com/2014/08/clearpath-takes-stance-against-killer-robots/">Major corporations</a> have also called for the prohibition.</p>
<p>More than 160 <a href="https://www.paxforpeace.nl/stay-informed/news/religious-leaders-call-for-a-ban-on-killer-robots">faith leaders</a> and more than 20 <a href="https://nobelwomensinitiative.org/nobel-peace-laureates-call-for-preemptive-ban-on-killer-robots/?ref=204">Nobel Peace Prize laureates</a> have similarly condemned the technology and backed a ban. Several <a href="http://www.openroboethics.org/wp-content/uploads/2015/11/ORi_LAWS2015.pdf">international</a> and <a href="http://duckofminerva.dreamhosters.com/wp-content/uploads/2013/06/UMass-Survey_Public-Opinion-on-Autonomous-Weapons.pdf">national</a> public opinion polls have found that a majority of people who responded opposed developing and using fully autonomous weapons.</p>
<p>The <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, a coalition of 75 nongovernmental organizations from 42 countries, has led opposition by nongovernmental groups. Human Rights Watch, for which I work, co-founded and coordinates the campaign.</p>
<h2>Other problems with killer robots</h2>
<p>Fully autonomous weapons would <a href="https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf">threaten more</a> than humanity and the public conscience. They would likely violate other key rules of international law. Their use would create a gap in accountability because no one could be held individually liable for the unforeseeable actions of an autonomous robot.</p>
<p>Furthermore, the existence of killer robots would spark widespread proliferation and an arms race – dangerous developments made worse by the fact that fully autonomous weapons would be vulnerable to hacking or technological failures.</p>
<p>Bolstering the case for a ban, our Martens Clause assessment highlights in particular how delegating life-and-death decisions to machines would violate core human values. Our report finds that there should always be meaningful human control over the use of force. We urge countries at this U.N. meeting to work toward a new treaty that would save people from lethal attacks made without human judgment or compassion. A clear ban on fully autonomous weapons would reinforce the longstanding moral and legal foundations of international humanitarian law articulated in the Martens Clause.</p>
<p class="fine-print"><em><span>Bonnie Docherty works as a senior researcher in the Arms Division of Human Rights Watch.</span></em></p>
A standard element of international humanitarian law since 1899 should guide countries as they consider banning lethal autonomous weapons systems.
Bonnie Docherty, Lecturer on Law and Associate Director of Armed Conflict and Civilian Protection, International Human Rights Clinic, Harvard Law School, Harvard University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/94124
2018-04-18T12:57:49Z
2018-04-18T12:57:49Z
Five reasons why robots won’t take over the world
<figure><img src="https://images.theconversation.com/files/214556/original/file-20180412-592-1hnncl5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/651441412?src=km2LvQIDh_rWeN0LiP87jA-1-10&size=huge_jpg">Shutterstock</a></span></figcaption></figure><p>Scientists are known for making dramatic predictions about the future – and sinister <a href="https://theconversation.com/uk/topics/robots-6403">robots</a> are once again in the spotlight now that <a href="https://theconversation.com/uk/topics/artificial-intelligence-90">artificial intelligence</a> has become a marketing tool for all sorts of different brands. </p>
<p>At the end of World War Two, it was stated that <a href="https://som.yale.edu/blog/peter-thiel-at-yale-we-wanted-flying-cars-instead-we-got-140-characters">flying cars</a> were just around the corner and that all energy problems would be solved by <a href="https://motherboard.vice.com/en_us/article/wnxj9q/why-widespread-fusion-energy-is-taking-so-damn-long">fusion energy</a> by the end of the 20th century. But decades on, we don’t seem much closer to either of those predictions coming true. </p>
<p>So what’s with all this talk – <a href="https://www.cnbc.com/2017/12/18/9-mind-blowing-things-elon-musk-said-about-robots-and-ai-in-2017.html">fuelled</a> by the likes of space baron Elon Musk – about robots taking over the world? </p>
<p>Pessimists <a href="https://www.bloomberg.com/news/articles/2016-01-18/rise-of-the-robots-will-eliminate-more-than-5-million-jobs">predict</a> that robots will jeopardise jobs across the globe, and not only in industrial production. They claim robot journalists, robot doctors and robot lawyers will replace human experts. And, as a consequence of a <a href="https://www.commondreams.org/views/2016/12/05/what-robots-are-doing-middle-class">melting-down middle class</a>, there will be mass poverty and political instability. </p>
<p>Optimists <a href="http://www.dw.com/en/robots-paradise-on-earth/av-36191047">predict a new paradise</a> where all the tedious problems of human relationships can be overcome by having a perfect life with easily replaceable robot partners, which will fulfil our basic needs as well as our deepest longings. And “work” will become an ancient concept. </p>
<p>The pessimists, however, can relax and the optimists need to cool their boots. As experts in the field of robotics, we believe that robots will be much more visible in the future, but – at least over the next two decades – they will be clearly recognisable as machines. </p>
<p>This is because there is still a long way to go before robots will be able to match a number of fundamental human skills. Here are five reasons why robots aren’t about to take over the world. </p>
<h2>1. Human-like hands</h2>
<p>Scientists are far from replicating the complexity of human hands. The hands of robots that are used today in real applications are clumsy. The more sophisticated hands developed in labs are not robust enough and lack the dexterity of human hands. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=370&fit=crop&dpr=1 600w, https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=370&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=370&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=466&fit=crop&dpr=1 754w, https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=466&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/214559/original/file-20180412-549-7ljgm5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=466&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Comparison of a human hand with a robotic one.</span>
<span class="attribution"><a class="source" href="http://www.shadowrobot.com/wp-content/uploads/2012/12/Human-and-Hand-332x205.jpg">Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>2. Tactile perception</h2>
<p>There is no technical match for the magnificent human and animal skin, which encompasses a variety of tactile sensors. This perception is required for complex manipulation. Also, the software that processes the input from a robot’s sensors is nowhere near as sophisticated as the human brain when it comes to interpreting and reacting to the messages received from tactile sensors.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/214558/original/file-20180412-570-1bgqjdk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Sophia, a humanoid robot, ‘speaking’ at an event in Moscow, Russia in 2017.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/moscow-russia-october-1-2017-sophia-747212605">Shutterstock</a></span>
</figcaption>
</figure>
<h2>3. Control of manipulation</h2>
<p>Even if we had artificial hands comparable to human hands and sophisticated artificial skin, we would still need to design a way to control them so they can manipulate objects in a human-like way. Human children take years to learn this, and the learning mechanisms are not well understood.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/214563/original/file-20180412-587-1ck7azf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Children study modern robots at an exhibition.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/710786497?src=CGfLjSYqtn4SIxydp9d71Q-1-99&size=huge_jpg">Shutterstock</a></span>
</figcaption>
</figure>
<h2>4. Human and robot interaction</h2>
<p>Interaction between humans relies on well-functioning speech and object recognition, as well as other senses such as smell, taste and touch. While there has been significant progress in robot speech and object recognition, today’s systems still achieve a high degree of performance only in rather controlled environments.</p>
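<p>The short sketch below, using Python’s standard difflib module, shows why a controlled setting helps: with a closed set of commands, even a garbled transcript can be snapped to the nearest known phrase. The command list and the similarity cutoff are illustrative choices, not a production system.</p>
<pre><code>
# Snap a noisy transcript onto a closed command set. Open-ended dialogue
# offers no such safety net, which is why constrained tasks are easier.
import difflib

COMMANDS = ["bring water", "go to breakfast", "collect laundry", "stop"]

def interpret(transcript):
    """Map a (possibly garbled) transcript to the closest known command."""
    matches = difflib.get_close_matches(transcript.lower(), COMMANDS,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None

print(interpret("bring watter"))     # -> bring water
print(interpret("what time is it"))  # -> None (outside the command set)
</code></pre>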
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/214565/original/file-20180412-577-qu2h92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A robot offers assistance in a shopping mall.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/688183171?src=btE8K4P15N0epL9DFyDYUA-1-63&size=huge_jpg">Shutterstock</a></span>
</figcaption>
</figure>
<h2>5. Human reason</h2>
<p>Not everything that is technically possible needs to be built. Human reason could decide not to fully develop such robots because of their potential to harm society. And if, decades from now, the technical problems above are overcome and complex human-like robots can be built, regulation could still prevent misuse.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/214570/original/file-20180412-570-18x3skh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The brains have it.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/630414293?src=1n4JzyB_qYM2wSVB-2kv8A-1-4&size=huge_jpg">Shutterstock</a></span>
</figcaption>
</figure>
<h2>Smooth out the edges</h2>
<p>In our research project, <a href="http://smooth-robot.dk/da/forside-3/">SMOOTH</a>, we design robots that we hope will operate in elderly care institutions by 2022. These robots will perform repetitive tasks involving human-robot interaction, such as transporting laundry and waste, offering water to residents, or guiding them to the breakfast table. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/robot-cities-three-urban-prototypes-for-future-living-90281">Robot cities: three urban prototypes for future living</a>
</strong>
</em>
</p>
<hr>
<p>We had to simplify the robots, and carefully select the tasks they perform, to ensure they can become commercially viable products within four years.</p>
<p>Our approach wasn’t to solve the first three problems of human-like hands, tactile perception and control of manipulation, but to avoid those robotic roadblocks. </p>
<p>To address the fourth problem, human-robot interaction, we chose repetitive tasks to reduce complexity, since the expected interactions are, to a certain degree, predictable. </p>
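<p>A minimal sketch of why this works: a repetitive task fits a small, predictable state machine in which every interaction has a known follow-up. The states below are hypothetical, loosely modelled on a water-offering round, not the actual SMOOTH software.</p>
<pre><code>
# A repetitive care task as a tiny finite-state machine. States and
# transitions are hypothetical illustrations.
from enum import Enum, auto

class State(Enum):
    DRIVE_TO_RESIDENT = auto()
    OFFER_WATER = auto()
    WAIT_FOR_ANSWER = auto()
    RETURN_TO_BASE = auto()

def step(state):
    """Advance the task: every state has exactly one anticipated successor."""
    transitions = {
        State.DRIVE_TO_RESIDENT: State.OFFER_WATER,
        State.OFFER_WATER: State.WAIT_FOR_ANSWER,
        State.WAIT_FOR_ANSWER: State.RETURN_TO_BASE,
        State.RETURN_TO_BASE: State.DRIVE_TO_RESIDENT,  # next round
    }
    return transitions[state]

s = State.DRIVE_TO_RESIDENT
for _ in range(4):
    print(s.name)
    s = step(s)
</code></pre>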
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fUyU3lKzoio?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Robots are already a reality in industry, and they will appear in public spaces in forms more complex than robot vacuum cleaners. But for the next two decades, robots will not be human-like, even if some are made to look human. They will remain sophisticated machines. </p>
<p>So you can stand down from any fear of a robot uprising in the near future.</p>
<p class="fine-print"><em><span>Norbert Krüger has received funding from the EU and various Danish national fonds, in particular the Innovation Fund Denmark.</span></em></p><p class="fine-print"><em><span>Ole Dolriis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Robots can’t achieve high fives all round without human-like hands, tactile perception, manipulation control, seamless interaction and human reason, experts say.
Norbert Krüger, Professor, University of Southern Denmark
Ole Dolriis, Robotics lecturer, University of Southern Denmark
Licensed as Creative Commons – attribution, no derivatives.