tag:theconversation.com,2011:/uk/topics/autonomous-systems-41700/articlesAutonomous systems – The Conversation2023-01-31T19:12:20Ztag:theconversation.com,2011:article/1966642023-01-31T19:12:20Z2023-01-31T19:12:20ZOur future could be full of undying, self-repairing robots. Here’s how<figure><img src="https://images.theconversation.com/files/507247/original/file-20230131-24-1wnmot.jpg?ixlib=rb-1.1.0&rect=419%2C14%2C4109%2C2200&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">frank60/Shutterstock</span></span></figcaption></figure><p>With generative artificial intelligence (AI) systems such as <a href="https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461">ChatGPT</a> and <a href="https://theconversation.com/ai-image-generation-is-advancing-at-astronomical-speeds-can-we-still-tell-if-a-picture-is-fake-191674">Stable Diffusion</a> being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.</p>
<p>Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.</p>
<p>Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting <a href="https://en.wikipedia.org/wiki/T-1000">T-1000 robot</a> from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?</p>
<h2>Energy for ‘life’</h2>
<p>Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.</p>
<p>By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates energy supply in the world of robotics. Most robots are powered by the <a href="https://blog.mentyor.com/chemistry-of-batteries/">chemistry of batteries</a>. </p>
<p>An alternative battery type has been proposed that uses <a href="https://www.popularmechanics.com/science/green-tech/a35970222/radioactive-diamond-battery-will-run-for-28000-years/">nuclear waste and ultra-thin diamonds at its core</a>. The inventors, a San Francisco startup called <a href="https://ndb.technology/">Nano Diamond Battery</a>, claim a possible battery life of tens of thousands of years. Very small robots would be ideal users of such batteries.</p>
<p>But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from the Berkeley Lab and UMass Amherst in the US demonstrated that tiny nanobots can get their energy from chemicals in the <a href="https://newscenter.lbl.gov/2021/12/08/liquid-robots-never-run-out/">liquid they swim in</a>.</p>
<p>The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BdS72O2c9nQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Repairing and copying oneself</h2>
<p>Of course, an undying robot might still need occasional repairs.</p>
<p>Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated <a href="https://robots.ieee.org/robots/pr2/">a research robot called PR2</a> tightening its <a href="https://ieeexplore.ieee.org/document/9035045">own screw using a screwdriver</a>. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/47NjYRWVjLk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, <a href="https://www.newscientist.com/article/2350609-self-healing-robot-recovers-from-being-stabbed-then-walks-off/">and then walked off</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/KTJaxxzTKYc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Another unusual repair concept is to use objects a robot finds in its environment to replace its broken parts.</p>
<p>Last year, scientists reported how <a href="https://www.popularmechanics.com/technology/robots/a40746165/dead-spider-leg-grippers/">dead spiders can be used as robot grippers</a>. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots as working parts.</p>
<figure class="align-center ">
<img alt="A video of a spider attached to a syringe being lowered onto another spider and picking it up" src="https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=472&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=472&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=472&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=593&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=593&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=593&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper.</span>
<span class="attribution"><span class="source">Preston Innovation Laboratory/Rice University</span></span>
</figcaption>
</figure>
<h2>A robot colony?</h2>
<p>From all these recent developments, it’s clear that, in principle, a single robot may one day be able to live forever. But there is a very long way to go.</p>
<p>Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.</p>
<p>The ultimate solution may be large colonies or swarms of tiny robots that share a common brain, or mind. After all, this is exactly how many species of insects have evolved.</p>
<p>The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is <a href="https://aeon.co/ideas/an-ant-colony-has-memories-that-its-individual-members-dont-have">not contained within any of the ants</a>.</p>
<p>This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.</p>
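<p>A toy simulation hints at how such a collective “memory” could outlive any individual robot. Everything below is hypothetical – the <code>Swarm</code> class, the consensus rule and the numbers are invented for illustration – but it shows the core trick: a value held across the group persists even as members are replaced.</p>

```python
import random

class Robot:
    """An individual unit: holds only a local estimate, no global memory."""
    def __init__(self, estimate):
        self.estimate = estimate

class Swarm:
    """Colony-level 'memory' emerges as the average of local estimates."""
    def __init__(self, size=100):
        self.robots = [Robot(random.gauss(10.0, 1.0)) for _ in range(size)]

    def consensus(self):
        # The shared value is not stored in any single robot.
        return sum(r.estimate for r in self.robots) / len(self.robots)

    def replace_robot(self, index):
        # A new unit bootstraps its state from the surviving majority,
        # so the collective value outlives any individual member.
        self.robots[index] = Robot(self.consensus())

swarm = Swarm()
before = swarm.consensus()
for i in range(50):          # replace half the swarm, one unit at a time
    swarm.replace_robot(i)
after = swarm.consensus()
# The colony-level value barely moves, though half the members are new.
```

This mirrors the ant-colony finding: the “memory” lives in the pattern across individuals, not inside any one of them.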
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A close-up swarm of orange ants forming a living bridge between two green leaves" src="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ant colonies can contain ‘memories’ that are distributed between many individual insects.</span>
<span class="attribution"><span class="source">frank60/Shutterstock</span></span>
</figcaption>
</figure>
<p>Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.</p>
<p>Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we’ll also have the chance to build in some safeguards.</p>
<p class="fine-print"><em><span>Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge - an international drone competition.</span></em></p>If we’re going to put an AI brain somewhere, it’s likely going to be a robot. The next step – making that robot immortal.Jonathan Roberts, Professor in Robotics, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1885202022-08-21T20:03:06Z2022-08-21T20:03:06ZAustralia’s pursuit of ‘killer robots’ could put the trans-Tasman alliance with New Zealand on shaky ground<figure><img src="https://images.theconversation.com/files/479984/original/file-20220818-546-nyccc.jpg?ixlib=rb-1.1.0&rect=0%2C242%2C8986%2C4944&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>Australia’s recently <a href="https://www.defence.gov.au/about/reviews-inquiries/defence-strategic-review">announced</a> defence review, intended to be the most thorough in almost four decades, will give us a good idea of how Australia sees its role in an increasingly tense strategic environment.</p>
<p>As Australia is New Zealand’s only formal military ally, its defence choices will have significant implications, both for New Zealand and for regional geopolitics.</p>
<p>There are several areas of contention in the trans-Tasman relationship. One is Australia’s pursuit of nuclear-powered submarines, which clashes with New Zealand’s anti-nuclear stance. Another lies in the two countries’ diverging approaches to autonomous weapons systems (AWS), colloquially known as “killer robots”. </p>
<figure class="align-center ">
<img alt="Boeing Australia's autonomous 'loyal wingman' aircraft" src="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479242/original/file-20220816-20306-j1c4ti.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Boeing Australia is developing autonomous ‘loyal wingman’ aircraft to complement manned aircraft.</span>
<span class="attribution"><a class="source" href="https://www.flightglobal.com/defence/boeing-australia-pushes-loyal-wingman-maiden-flight-to-2021/141691.article">Boeing</a>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In general, AWS are <a href="https://www.beehive.govt.nz/sites/default/files/2021-11/Autonomous-Weapons-Systems-Cabinet-paper.pdf">considered</a> to be “weapons systems that, once activated, can select and engage targets without further human intervention”. There is, however, no internationally agreed definition.</p>
<p>New Zealand is involved with international attempts to ban and regulate AWS. It seeks a ban on systems that “are not sufficiently predictable or controllable to meet legal or ethical requirements” and advocates for “rules or limits to govern the development and use of AWS”. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1424978867614228485"}"></div></p>
<p>If this seems vague to you, it should. This ambiguity in definition makes it difficult to determine which systems New Zealand seeks to ban or regulate.</p>
<h2>Australia’s prioritisation of AWS</h2>
<p>Australia, meanwhile, has been developing what it more commonly refers to as robotics and autonomous systems (RAS) with <a href="https://www.tandfonline.com/doi/full/10.1080/10357718.2022.2095615">gusto</a>. Since 2016, Australia has identified RAS as a priority area of development and substantially increased <a href="https://www.dst.defence.gov.au/nextgentechfund">funding</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<p>The Australian <a href="https://www.navy.gov.au/sites/default/files/documents/RAN_WIN_RASAI_Strategy_2040f2_hi.pdf">navy</a>, <a href="https://researchcentre.army.gov.au/sites/default/files/2020-03/robototic_autonomous_systems_strategy.pdf">army</a> and defence force (<a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">ADF</a>) have each released concept documents since 2018, discussing RAS and their associated benefits, risks, challenges and opportunities.</p>
<p>Key systems Australia is pursuing include the autonomous aircraft <a href="https://news.defence.gov.au/service/introducing-ghost-bat">Ghost Bat</a>, three different kinds of <a href="https://www.australiandefence.com.au/defence/sea/navy-s-uncrewed-undersea-plans">extra-large underwater autonomous vehicles</a> and <a href="https://www.minister.defence.gov.au/minister/melissa-price/media-releases/autonomous-truck-project-passes-major-milestone">autonomous trucks</a>.</p>
<h2>Why is Australia seeking to develop these technologies?</h2>
<p>The short answer is threefold: military advantage, saving lives and economics.</p>
<p>Australia and its allies and partners, particularly the US, are <a href="https://www.ussc.edu.au/analysis/us-china-technology-competition-and-what-it-means-for-australia">fearful</a> of losing the technological superiority they have long held over rivals such as China. </p>
<p>Large military capabilities, like nuclear-powered submarines, take both time and money to acquire. Australia is further limited in what it can do by the size of its defence force. RAS are seen as a way to potentially maintain advantage, and to do more with less.</p>
<p>RAS are also seen as a way to save lives. A <a href="https://media.defense.gov/2020/Nov/23/2002540369/-1/-1/1/WYATT.PDF">survey</a> of Australian military personnel found they considered reduction of harm and injury to defence personnel, allied personnel and civilians among the most important potential benefits of RAS. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616">UN fails to agree on 'killer robot' ban as nations pour billions into autonomous weapons research</a>
</strong>
</em>
</p>
<hr>
<p>The Australian Defence Force also <a href="https://tasdcrc.com.au/wp-content/uploads/2020/12/ADF-Concept-Robotics.pdf">believes</a> RAS will be cheaper than large platforms. Inflation means money already committed to defence has less purchasing power. RAS present an opportunity to achieve the same outcomes at a lower cost.</p>
<p>Meanwhile, in 2018, the Australian government outlined its intention to become a top-ten <a href="https://www.ft.com/content/d743d758-04b2-11e8-9650-9c0ad2d7c5b5">defence exporter</a>. There are keen <a href="https://breakingdefense.com/2022/03/aussies-aim-for-1b-in-exports-of-loyal-wingman-now-ghost-bat/">hopes</a> the Ghost Bat will become a successful defence export. </p>
<p>At the same time, the government is keen to <a href="https://apo.org.au/sites/default/files/resource-files/2016-02/apo-nid93621.pdf">build</a> closer ties between defence, industry and academia. Industry and academia both vie for defence funding, and this drives development of RAS.</p>
<p>Of course, the technology is new. It’s not guaranteed RAS will save lives, save money or achieve military advantage. The extent to which RAS will be used, and what they will be used for, is not foreseeable. It is in this uncertainty that New Zealand must make judgments about AWS and alliance management.</p>
<figure class="align-center ">
<img alt="Armed Autonomous aerial vehicle on runway" src="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/479985/original/file-20220818-164-hnhgr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Autonomous systems are seen as a way to save lives.</span>
<span class="attribution"><span class="source">Getty Images</span></span>
</figcaption>
</figure>
<h2>What this means for the trans-Tasman relationship</h2>
<p>The nuclear-powered submarines captured attention when Australia’s new AUKUS partnership with the US and UK was announced, but AUKUS’s primary purpose is broader: a partnership for sharing defence technology, including RAS. </p>
<p>The most recent statement from the AUKUS working groups <a href="https://www.gov.uk/government/news/readout-of-aukus-joint-steering-group-meetings--2">says</a> they “will seek opportunities to engage allies and close partners”. Last week, US Deputy Secretary of State Wendy Sherman made it clear New Zealand was one such <a href="https://www.rnz.co.nz/news/political/472583/us-would-have-conversations-with-new-zealand-if-time-comes-for-others-to-join-aukus-top-diplomat">partner</a>.</p>
<p>Australia’s focus on RAS, particularly in the context of AUKUS, may soon bring alliance questions to the fore. Strategic studies expert Robert Ayson has argued AUKUS, combined with increased strategic tension, <a href="https://pacforum.org/publication/pacnet-48-new-zealand-and-aukus-affected-without-being-included">means</a> that “year by year New Zealand’s alliance commitment to the defence of Australia will carry bigger implications”. AWS will play a role in these implications.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/nukes-allies-weapons-and-cost-4-big-questions-nzs-defence-review-must-address-188732">Nukes, allies, weapons and cost: 4 big questions NZ's defence review must address</a>
</strong>
</em>
</p>
<hr>
<p>AWS may seem an insignificant trans-Tasman difference compared to the use of nuclear technologies. But AWS come with a lot more uncertainty and fuzziness than, say, <a href="https://www.smh.com.au/world/oceania/not-in-our-waters-ardern-says-no-to-visits-from-australia-s-new-nuclear-subs-20210916-p58s7k.html">banning</a> nuclear-powered submarines in New Zealand waters. This fuzziness creates ample room for misperceptions and poor communication.</p>
<p>Trust in alliance relationships is easily damaged, and difficult to manage. Clear communication and ensuring a good understanding of each other’s positions is essential. The ambiguity of AWS makes these things difficult. </p>
<p>New Zealand and Australia may need to clarify their respective positions before Australia’s defence review is released next March. Otherwise, they run the risk of fuelling misunderstandings at a delicate moment for trans-Tasman relations.</p>
<p class="fine-print"><em><span>Sian Troath receives funding from The Royal Society of New Zealand Marsden Fund.</span></em></p>Diverging views on automated weapons systems could make it difficult for Australia and New Zealand to manage military ties at a delicate time in trans-Tasman relations.Sian Troath, Postdoctoral fellow, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1709612021-11-23T18:56:16Z2021-11-23T18:56:16ZThe self-driving trolley problem: how will future AI systems make the most ethical choices for all of us?<figure><img src="https://images.theconversation.com/files/433389/original/file-20211123-19-1rrgiix.jpeg?ixlib=rb-1.1.0&rect=199%2C387%2C6790%2C3541&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Artificial intelligence (AI) is already making decisions in the fields of business, health care and manufacturing. But AI algorithms generally still get help from people applying checks and making the final call. </p>
<p>What would happen if AI systems had to make independent decisions, and ones that could mean life or death for humans? </p>
<p>Pop culture has long portrayed our general distrust of AI. In the 2004 sci-fi movie I, Robot, detective Del Spooner (played by Will Smith) is suspicious of robots after being rescued by one from a car crash, while a 12-year-old girl was left to drown. He <a href="https://www.imdb.com/title/tt0343818/characters/nm0371671">says</a>:</p>
<blockquote>
<p>I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody’s baby – 11% is more than enough. A human being would’ve known that.</p>
</blockquote>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1151548669830860802"}"></div></p>
<p>Unlike humans, robots lack a moral conscience and follow the “ethics” programmed into them. At the same time, human morality is highly variable. The “right” thing to do in any situation will depend on who you ask.</p>
<p>For machines to help us to their full potential, we need to make sure they <a href="https://www.moralmachine.net/">behave ethically</a>. So the question becomes: how do the ethics of AI developers and engineers influence the decisions made by AI? </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501">After 75 years, Isaac Asimov's Three Laws of Robotics need updating</a>
</strong>
</em>
</p>
<hr>
<h2>The self-driving future</h2>
<p>Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day’s meetings, catch up on news, or sit back and relax. </p>
<p>But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead. </p>
<p>The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we’re only a few years away from potentially facing such dilemmas. </p>
<p>Autonomous cars will generally provide safer driving, but accidents will be inevitable – especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users. </p>
<p>Tesla <a href="https://techcrunch.com/2021/05/07/tesla-refutes-elon-musks-timeline-on-full-self-driving/#">does not yet produce</a> fully autonomous cars, although it plans to. In collision situations, Tesla cars don’t automatically operate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control. </p>
<p>In other words, the driver’s actions are not disrupted – even if they themselves are causing the collision. Instead, if the <a href="https://www.forbes.com/sites/patricklin/2017/04/05/heres-how-tesla-solves-a-self-driving-crash-dilemma/?sh=1a3225616813">car detects a potential collision</a>, it sends alerts to the driver to take action. </p>
<p>In “autopilot” mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, it has a moral obligation to override the driver’s actions in every scenario. But would we want an autonomous car to make this decision?</p>
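<p>The split between warning the driver and overriding them can be sketched as simple decision logic. This is a hypothetical illustration only – the mode names and return strings are invented for the example, and it is not Tesla’s actual implementation:</p>

```python
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()      # a human driver is in control
    AUTOPILOT = auto()   # the car is driving itself

def collision_response(mode: Mode, collision_predicted: bool) -> str:
    """Sketch of the policy described above: warn the human in manual
    mode, brake automatically when the car itself is in control."""
    if not collision_predicted:
        return "no action"
    if mode is Mode.MANUAL:
        # The driver's inputs are not overridden; the car only warns.
        return "alert driver"
    # In autopilot, the car is responsible for avoiding the collision.
    return "automatic emergency braking"
```

The ethical question is precisely whether the <code>MANUAL</code> branch should ever be allowed to exist once the car knows a collision is coming.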
<figure>
<iframe src="https://player.vimeo.com/video/192179726" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</figure>
<h2>What’s a life worth?</h2>
<p>What if a car’s computer could evaluate the relative “value” of the passenger in its car and of the pedestrian? If its decision considered this value, technically it would just be making a cost-benefit analysis. </p>
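<p>Reduced to code, such a cost-benefit calculation is uncomfortably simple. The sketch below is purely illustrative – the <code>Outcome</code> class, the probabilities and the action names are all invented – and it shows how mechanical, and how blind to everything the numbers omit, the calculation would be:</p>

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical candidate action and its estimated consequences."""
    action: str
    probability_of_fatality: float  # a sensor/model estimate, 0..1
    people_at_risk: int

def expected_harm(o: Outcome) -> float:
    # A naive utilitarian score: expected number of fatalities.
    return o.probability_of_fatality * o.people_at_risk

def choose(outcomes):
    # Picks the action minimising expected harm -- and nothing else.
    return min(outcomes, key=expected_harm)

# The dilemma from the article, with made-up numbers:
swerve = Outcome("swerve into pole", probability_of_fatality=0.8, people_at_risk=1)
ahead = Outcome("continue ahead", probability_of_fatality=0.9, people_at_risk=1)

decision = choose([swerve, ahead])
```

Everything contentious hides inside the estimates: change who counts as “at risk”, or weight one life above another, and the same three lines of arithmetic produce a very different choice.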
<p>This may sound alarming, but technologies that could allow it are already being developed. For instance, the recently re-branded Meta (formerly Facebook) has highly developed facial recognition that can easily identify individuals in a scene.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-will-drop-its-facial-recognition-system-but-heres-why-we-should-be-sceptical-171186">Facebook will drop its facial recognition system – but here's why we should be sceptical</a>
</strong>
</em>
</p>
<hr>
<p>If these data were incorporated into an autonomous vehicle’s AI system, the algorithm could place a dollar value on each life. This possibility was explored in an extensive 2018 study conducted by experts at the Massachusetts Institute of Technology and colleagues. </p>
<p>Through the <a href="https://www.nature.com/articles/s41586-018-0637-6">Moral Machine</a> experiment, researchers posed various self-driving car scenarios that compelled participants to decide whether to kill a homeless pedestrian or an executive pedestrian. </p>
<p>Results revealed participants’ choices depended on the level of economic inequality in their country: the greater the inequality, the more likely participants were to sacrifice the homeless man. </p>
<p>While not quite as advanced, such data aggregation is already in use in China’s <a href="https://www.businessinsider.com.au/china-social-credit-system-punishments-and-rewards-explained-2018-4">social credit</a> system, which decides what social entitlements people have. </p>
<p>The health-care industry is another area where we will see AI making decisions that could save or harm humans. Experts are increasingly <a href="https://theconversation.com/ai-could-be-our-radiologists-of-the-future-amid-a-healthcare-staff-crisis-120631">developing AI to spot anomalies</a> in <a href="https://www.aidoc.com/blog/5-ways-ai-can-assist-radiologists/#">medical imaging</a>, and to help physicians in prioritising medical care.</p>
<p>For now, doctors have the final say, but as these technologies become increasingly advanced, what will happen when a doctor and AI algorithm don’t make the same diagnosis? </p>
<p>Another example is an automated medicine reminder system. How should the system react if a patient refuses to take their medication? And how does that affect the patient’s autonomy, and the overall accountability of the system? </p>
<p>AI-powered drones and weaponry are also ethically concerning, as they can make the decision to kill. There are conflicting views on whether such technologies should be completely <a href="https://www.theguardian.com/news/2020/oct/15/dangerous-rise-of-military-ai-drone-swarm-autonomous-weapons">banned or regulated</a>. For example, the use of autonomous drones can be limited to surveillance. </p>
<p>Some have called for military robots to be programmed with ethics. But this raises issues about the programmer’s accountability in the case where a drone kills civilians by mistake.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/gun-toting-robo-dogs-look-like-a-dystopian-nightmare-thats-why-they-offer-a-powerful-moral-lesson-170267">Gun-toting robo-dogs look like a dystopian nightmare. That's why they offer a powerful moral lesson</a>
</strong>
</em>
</p>
<hr>
<h2>Philosophical dilemmas</h2>
<p>There have been many philosophical debates regarding the ethical decisions AI will have to make. The classic example of this is the <a href="https://theconversation.com/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111">trolley problem</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bOpf6KcWYyw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on <a href="https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/">a range of factors</a> including the respondent’s age, gender and culture.</p>
<p>When it comes to AI systems, the algorithms’ training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.</p>
<p>If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in. </p>
<p>Examples of failures and bias in technology implementation have included a <a href="https://www.iflscience.com/technology/this-racist-soap-dispenser-reveals-why-diversity-in-tech-is-muchneeded/">racist soap dispenser</a> and inappropriate <a href="https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app">automatic image labelling</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;897756900753891328&quot;}"></div></p>
<p>AI is not “good” or “evil”. The effects it has on people will depend on the ethics of its developers. So to make the most of it, we’ll need to reach a consensus on what we consider “ethical”.</p>
<p>While private companies, public organisations and research institutions have their own guidelines for ethical AI, the United Nations has recommended developing what they call “<a href="https://en.unesco.org/artificial-intelligence/ethics#recommendation">a comprehensive global standard-setting instrument</a>” to provide a global ethical AI framework – and ensure human rights are protected.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Between driverless cars, autonomous weapons and AI-powered medical diagnostic tools, it seems there will be no shortage of ethically-complex situations involving AI in the future.Jumana Abu-Khalaf, Research Fellow in Computing and Security, Edith Cowan UniversityPaul Haskell-Dowland, Professor of Cyber Security Practice, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1506882020-11-27T18:19:38Z2020-11-27T18:19:38ZBoeing 737 Max: why was it grounded, what has been fixed and is it enough?<p>The Boeing 737 Max began flying commercially in May 2017 but has been grounded for over a year and a half following two crashes within five months. On October 29 2018, <a href="https://www.bbc.com/news/world-asia-46014463">Lion Air Flight 610</a> took off from Jakarta. It quickly experienced problems in maintaining altitude, entered into an uncontrollable dive and crashed into the Java Sea about 13 minutes after takeoff. Then on March 10 2019, <a href="https://nymag.com/intelligencer/2019/04/what-passengers-experienced-on-the-ethiopian-airlines-flight.html">Ethiopian Airlines Flight 302</a> from Nairobi suffered similar problems, crashing into the desert around six minutes after leaving the runway.</p>
<p>In total, 346 people lost their lives. After the second crash, US regulator the Federal Aviation Administration (FAA) decided to ground all 737 Max planes, of which around 350 had been delivered at the time, while they investigated the causes of the accidents.</p>
<p>Now, 20 months later, the FAA <a href="http://news.aa.com/news/news-details/2020/Return-of-the-Boeing-737-MAX-to-service-OPS-DIS-11/default.aspx#:%7E:text=Today%2C%20the%20Federal%20Aviation%20Administration,its%20grounding%20in%20March%202019.&text=This%20includes%20investing%20in%20extensive,it%20returns%20to%20commercial%20use">has announced</a> that it is rescinding this order and has set out steps for the return of the aircraft to commercial service. Brazil has responded quickly, <a href="https://simpleflying.com/brazil-boeing-737-max-recertification/amp/">also approving</a> the 737 Max. So, what went wrong – and can we be confident that it has been fixed?</p>
<p>The causes of the two accidents were complex, but link mainly to the 737’s <a href="https://www.seattletimes.com/seattle-news/times-watchdog/the-inside-story-of-mcas-how-boeings-737-max-system-gained-power-and-lost-safeguards/">manoeuvring characteristics augmentation system</a> (MCAS), which was introduced to the 737 Max to manage changes in behaviour created by the plane having much larger engines than its predecessors.</p>
<p>There are some important points about the MCAS which we must consider when reviewing the “fixes”. The MCAS prevented stall (a sudden loss of lift due to the angle of the wing) by “pushing” the nose down. Stall is indicated through an angle of attack (AoA) sensor – the 737 Max is fitted with two, but MCAS only used one. If that AoA sensor failed, then the MCAS could <a href="https://www.york.ac.uk/assuring-autonomy/news/blog/accidental-autonomy/">activate when it shouldn’t</a>, unnecessarily pushing the nose down. The design meant that there was no automatic switch to the other AoA sensor, and MCAS kept working with the erroneous sensor values. This is what happened in both crashes.</p>
<p>The design of the MCAS meant that it was repeatedly activated if it determined that there was a risk of a stall. This meant that the nose was continually pushed down, making it hard for pilots to keep altitude or climb. The system was also hard to override. In both cases, the flight crews were unable to override the MCAS, although other crews had successfully managed to do so in similar situations, and this contributed to the two accidents.</p>
<h2>The fixes</h2>
<p>Have these things been fixed? The FAA has published an <a href="https://www.faa.gov/foia/electronic_reading_room/boeing_reading_room/media/737_RTS_Summary.pdf">extensive summary</a> explaining its decision. The MCAS software has been modified and now uses both AoA sensors, not one. The MCAS also now only activates once, rather than multiple times, when a potential stall is signalled by both the AoA sensors. Pilots are provided with an “AoA disagree warning” which indicates that there might be an erroneous activation of MCAS. This warning was not standard equipment at the time of the two accidents – it had to be purchased by airlines as an option. </p>
<p>Importantly, pilots will now be trained on the operation of the MCAS and management of its problems. Pilots claimed that initially they were <a href="https://www.bbc.co.uk/news/business-48281282">not even told</a> that MCAS existed. This training will have to be approved by the FAA.</p>
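<p>The revised activation logic described above can be sketched as follows. This is purely an illustration of the published description, not Boeing’s actual flight-control code; the threshold values (<code>STALL_AOA_DEG</code>, <code>DISAGREE_DEG</code>) are hypothetical.</p>

```python
STALL_AOA_DEG = 14.0   # hypothetical angle-of-attack stall threshold
DISAGREE_DEG = 5.5     # hypothetical sensor-disagreement threshold

def aoa_disagree(aoa_left, aoa_right):
    """Raise the 'AoA disagree' warning when the two angle-of-attack
    sensors differ by more than the threshold -- a sign that one
    sensor may be faulty and any MCAS activation may be erroneous."""
    return abs(aoa_left - aoa_right) > DISAGREE_DEG

def mcas_may_activate(aoa_left, aoa_right, already_activated):
    """Post-fix behaviour: require BOTH sensors to indicate a potential
    stall, and activate at most once rather than repeatedly."""
    if already_activated:
        return False
    if aoa_disagree(aoa_left, aoa_right):
        return False
    return aoa_left > STALL_AOA_DEG and aoa_right > STALL_AOA_DEG
```

<p>Under this sketch, the failure mode of the two accidents – one erroneous sensor driving repeated nose-down activations – is blocked twice over: the disagreement check rejects the faulty reading, and the single-activation rule prevents the system from pushing the nose down again and again.</p>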
<p>So, is all well? Probably. As the 737 Max accidents put Boeing and the FAA under such intense scrutiny, it is likely that the design and safety activities have been carried out and checked to the maximum extent possible. There is no such thing as perfection in such complex engineering processes, but it is clear that this has been an extremely intensive effort and that Boeing found and corrected a few other potential safety problems that were unrelated to the accidents. </p>
<p>Of course, we are not there yet. The more than 300 aircraft already delivered have to be modified, and the 450-or-so built but not delivered also need to be updated and checked by the FAA. Then the pilots need to be trained. And the airlines need passengers – but will they get them? That is an issue of trust.</p>
<h2>Safety culture and trust</h2>
<p>The <a href="https://www.youtube.com/watch?v=ep7oLR1xCW0">US Congressional Enquiry</a> was scathing about the culture at both Boeing and the FAA and the difficulty of the FAA in overseeing Boeing’s work. <a href="https://fisher.osu.edu/blogs/leadreadtoday/blog/a-textbook-case-for-disaster-psychological-safety-and-the-737-max">Some commentators</a> have also referred to an absence of psychological safety: “The assurance that one can speak up, offer ideas, point out problems, or deliver bad news without fear of retribution.” We have evidence that the engineering problems have been fixed, but safety culture is more nebulous and slow to change. </p>
<p>How would we know if trust has been restored? There are several possible indicators. </p>
<p>Due to the effects of COVID-19, airlines are running a reduced flight schedule, so they may not need to use the 737 Max. If they choose not to do so, despite its reduced operating costs compared to earlier 737 models, that will be telling. Certainly, all eyes will be on the first airline to return the aircraft to the skies. </p>
<p>Some US airlines <a href="https://simpleflying.com/how-to-tell-if-youre-flying-on-the-boeing-737-max/">have said</a> they will advise people which model of aircraft they will be flying. If passengers opt to avoid the 737 Max, that will speak volumes about public trust and confidence. </p>
<p>The FAA <a href="https://www.faa.gov/news/updates/?newsId=93206">press release</a> also says there has been an “unprecedented level of collaborative and independent reviews by aviation authorities around the world”. But if the international authorities ask for further checks or delay the reintroduction of the aircraft in their jurisdictions, that will be particularly significant as it reflects the view of the FAA’s professional peers. Brazil’s rapid response is a positive sign for this international engagement.</p>
<p>Hopefully, the first few years will prove uneventful and trust can be rebuilt. But only time will tell.</p>
<p class="fine-print"><em><span>John McDermid receives, or has received, funding from government research agencies, industry including in the aerospace sector and the Lloyd's Register Foundation, relevant to the safety of aircraft and autonomous systems. He has not received any funding directly relevant to the Boeing 737 Max.</span></em></p>Almost two years after crashing twice within five months and being pulled out of service, the Boeing 737 Max’s return to the skies has now been approved.John McDermid, Director, Assuring Autonomy International Programme, University of YorkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1294272020-03-06T13:02:33Z2020-03-06T13:02:33ZAutonomous vehicles can be fooled to ‘see’ nonexistent obstacles<figure><img src="https://images.theconversation.com/files/312623/original/file-20200129-92977-12wfpkc.png?ixlib=rb-1.1.0&rect=18%2C22%2C1219%2C896&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">LiDAR helps an autonomous vehicle 'visualize' what's around it.</span> <span class="attribution"><span class="source">Yulong Can with data from Baidu Apollo</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Nothing is more important to an autonomous vehicle than sensing what’s happening around it. Like human drivers, autonomous vehicles need the ability to make instantaneous decisions. </p>
<p>Today, most autonomous vehicles rely on multiple sensors to perceive the world. Most systems use a combination of cameras, radar sensors and LiDAR (light detection and ranging) sensors. Onboard computers fuse this data to create a comprehensive view of what’s happening around the car. Without this data, autonomous vehicles would have no hope of safely navigating the world. Cars that use multiple sensor systems both work better and are safer – each system can serve as a check on the others – but no system is immune from attack.</p>
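<p>The idea that each sensor system “serves as a check on the others” can be illustrated with a toy voting scheme – a deliberate simplification, not how any production perception stack actually fuses data:</p>

```python
def obstacle_confirmed(camera_sees_it, radar_sees_it, lidar_sees_it):
    """Toy cross-check: treat an obstacle as real only if at least two
    of the three independent sensor systems report it, so a single
    spoofed or failed sensor cannot trigger a response on its own."""
    votes = sum([camera_sees_it, radar_sees_it, lidar_sees_it])
    return votes >= 2
```

<p>Even in this simplified form, the trade-off is visible: requiring agreement protects against a single compromised sensor, but an attacker who can fool two modalities at once defeats the check.</p>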
<p>Unfortunately, these systems are not foolproof. Camera-based perception systems can be tricked simply by <a href="https://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms">putting stickers on traffic signs to completely change their meaning</a>. </p>
<p>Our work, from the <a href="http://vhosts.eecs.umich.edu/robustnet//about.html">RobustNet Research Group</a> at the University of Michigan with computer scientist <a href="https://scholar.google.com/citations?user=lcsu7m8AAAAJ">Qi Alfred Chen</a> from UC Irvine and colleagues from <a href="https://spqr.eecs.umich.edu">the SPQR lab</a>, has shown that the LiDAR-based perception system can be compromised, too. </p>
<p>By strategically spoofing the LiDAR sensor signals, the attack is able to fool the vehicle’s LiDAR-based perception system into “seeing” a nonexistent obstacle. If this happens, a vehicle could cause a crash by blocking traffic or braking abruptly.</p>
<h2>Spoofing LiDAR signals</h2>
<p>LiDAR-based perception systems have two components: the sensor and the machine learning model that processes the sensor’s data. A LiDAR sensor calculates the distance between itself and its surroundings by emitting a light signal and measuring how long it takes for that signal to bounce off an object and return to the sensor. The duration of this back-and-forth is also known as the “time of flight.”</p>
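<p>The time-of-flight calculation itself is simple – distance is half the round trip at the speed of light – which is also why the spoofing attack described below demands nanosecond precision. A minimal sketch:</p>

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(tof_seconds):
    """Distance = (speed of light x round-trip time) / 2,
    since the pulse travels out to the object and back."""
    return C * tof_seconds / 2.0

# A pulse returning after ~200 nanoseconds puts the object ~30 m away.
# Note the sensitivity: a timing error of just 1 ns shifts the
# perceived distance by about 15 cm -- which is why a spoofer must
# time its injected signals at the nanosecond level.
```
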
<p>A LiDAR unit sends out tens of thousands of light signals per second. Then its machine learning model uses the returned pulses to paint a picture of the world around the vehicle. It is similar to how a bat uses echolocation to know where obstacles are at night.</p>
<p>The problem is these pulses can be spoofed. To fool the sensor, an attacker can shine his or her own light signal at the sensor. That’s all you need to get the sensor mixed up.</p>
<p>However, it’s more difficult to spoof the LiDAR sensor to “see” a “vehicle” that isn’t there. To succeed, the attacker needs to precisely time the signals shot at the victim LiDAR. This has to happen at the nanosecond level, since the signals travel at the speed of light. Small differences will stand out when the LiDAR is calculating the distance using the measured time-of-flight. </p>
<p>If an attacker successfully fools the LiDAR sensor, they then also have to trick the machine learning model. Work done at the OpenAI research lab <a href="https://openai.com/blog/adversarial-example-research/">shows that</a> machine learning models are vulnerable to specially crafted signals or inputs – what are known as adversarial examples. For example, specially generated stickers on traffic signs can fool camera-based perception.</p>
<p>We found that an attacker could use a similar technique to craft perturbations that work against LiDAR. They would not be a visible sticker, but spoofed signals specially created to fool the machine learning model into thinking there are obstacles present when in fact there are none. The LiDAR sensor will feed the hacker’s fake signals to the machine learning model, which will recognize them as an obstacle.</p>
<p>The adversarial example – the fake object – could be crafted to meet the expectations of the machine learning model. For example, the attacker might create the signal of a truck that is not moving. Then, to conduct the attack, they might set it up at an intersection or place it on a vehicle that is driven in front of an autonomous vehicle.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/a6R6P3D70cE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A video illustration of the two methods used to trick the self-driving car’s AI.</span></figcaption>
</figure>
<h2>Two possible attacks</h2>
<p>To demonstrate the designed attack, we chose an autonomous driving system used by many car makers: Baidu <a href="http://apollo.auto/">Apollo</a>. This product has over 100 partners and has reached a mass production agreement with multiple manufacturers including <a href="https://techcrunch.com/2018/11/01/baidu-volvo-ford-autonomous-driving/">Volvo and Ford</a>. </p>
<p>By using real world sensor data collected by the Baidu Apollo team, we <a href="https://sites.google.com/umich.edu/advlidar/">demonstrated two different attacks</a>. In the first, an “emergency brake attack,” we showed how an attacker can suddenly halt a moving vehicle by tricking it into thinking an obstacle appeared in its path. In the second, an “AV freezing attack,” we used a spoofed obstacle to fool a vehicle stopped at a red light into remaining stopped after the light turned green.</p>
<p>By exploiting the vulnerabilities of autonomous driving perception systems, we hope to trigger an alarm for teams building autonomous technologies. Research into new types of security problems in the autonomous driving systems is just beginning, and we hope to uncover more possible problems before they can be exploited out on the road by bad actors.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/hYuvmwzqmsY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A simulated demonstration of two LiDAR spoofing attacks done by the researchers.</span></figcaption>
</figure>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Driverless vehicles rely heavily on sensors to navigate the world. They’re vulnerable to attack if bad actors trick them into ‘seeing’ things that aren’t there, potentially leading to deadly crashes.Yulong Cao, Ph.D. Candidate in Computer Science and Engineering, University of MichiganZ. Morley Mao, Professor of Electrical Engineering and Computer Science, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1251942020-01-22T13:37:19Z2020-01-22T13:37:19ZWhat a bundle of buzzing bees can teach engineers about robotic materials<figure><img src="https://images.theconversation.com/files/308138/original/file-20191220-11904-19nzsit.jpg?ixlib=rb-1.1.0&rect=613%2C0%2C3987%2C3055&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Individuals working together as one.</span> <span class="attribution"><span class="source">Orit Peleg and Jacob Peters</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Gathered inside a small shed in the midst of a peaceful meadow, my colleagues and I are about to flip the switch to start a seemingly mundane procedure: using a motor to shake a wooden board. But underneath this board, we have a swarm of roughly 10,000 honeybees, clinging to each other in a single magnificent pulsing cone.</p>
<p>As we share one last look of excited concern, the swarm, literally a chunk of living material, starts to move right and left, jiggling like jelly. </p>
<p>Who in their right minds would shake a honeybee swarm? My colleagues and I are studying swarms to deepen our understanding of these essential pollinators, and also to see how we can leverage that understanding in the world of robotics materials.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=299&fit=crop&dpr=1 600w, https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=299&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=299&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=376&fit=crop&dpr=1 754w, https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=376&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/307042/original/file-20191216-123998-19vuyqy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=376&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Honeybee swarms adapt to different branch shapes.</span>
<span class="attribution"><span class="source">Orit Peleg and Jacob Peters</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Many bees create one swarm</h2>
<p>The swarms in our study occur as part of the <a href="https://www.scientificamerican.com/article/how-honeybees-find-a-home/">reproductive cycle of European honeybee colonies</a>. When the number of bees exceeds available resources, usually in the spring or summer, a colony divides into two groups. One group, and a queen, fly away in search of a new permanent location while the rest of the bees remain behind.</p>
<p>During that effort, the relocating bees temporarily form a highly adaptable swarm that can hang from tree branches, <a href="https://apnews.com/3565459f20454cf399310eccd3bfbadf/NYPD-bee-squad-ready-for-sting-operations-on-urban-swarms">roofs, fences or cars</a>. While suspended, they have no nest to protect them from the elements. Huddling together allows them <a href="https://doi.org/10.1098/rsif.2013.1033">to minimize heat loss to the colder outside environment</a>. They also need to adapt in real time to temperature variations, rain and wind – all of which could shatter the fragile protection they share as one unit.</p>
<p>The swarm is orders of magnitude larger than the size of an individual bee. A bee could potentially coordinate its activity with neighboring bees right next to it, but it certainly couldn’t coordinate directly with any bees at the far edge of the swarm.</p>
<p>So how do they manage to maintain mechanical stability in the face of something like strong wind – a test that requires near simultaneous coordination throughout the entire swarm?</p>
<p>My colleagues <a href="https://scholar.google.com/citations?user=YYtLjJoAAAAJ&hl=en&oi=sra">Jacob Peters</a>, <a href="https://scholar.google.com/citations?user=Xt6THm8AAAAJ&hl=en&oi=ao">Mary Salcedo</a>, <a href="https://scholar.google.com/citations?user=iiyj5MsAAAAJ&hl=en&oi=sra">L. Mahadevan</a> and I devised a series of experiments to address that question — which brings us back to intentionally shaking the swarm.</p>
<h2>Individual actions, whole swarm response</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=717&fit=crop&dpr=1 600w, https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=717&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=717&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=901&fit=crop&dpr=1 754w, https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=901&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/307045/original/file-20191216-124027-1k6kspl.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=901&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Examining the experimental setup, with the pyramidal swarm hanging from the bottom of the board.</span>
<span class="attribution"><span class="source">Orit Peleg and Jake Peters</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>When we shook the swarm along its horizontal axis, the bees adjusted the shape of their swarm and within minutes became a wider, more stable cone. However, when the motion was vertical, the shape remained constant until a critical force was reached that caused the swarm to break apart.</p>
<p>Why did the bees respond to horizontal shaking, but not to vertical shaking? It’s all about how the <a href="https://doi.org/10.1038/s41567-018-0262-1">bonds bees create by “holding hands”</a> get stretched.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/310361/original/file-20200115-134772-sfa6ua.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Honeybees are essentially holding hands to create the dense swarm structure. How much the bonds between two bees stretch is important information that influences their actions for the good of the swarm.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/trust-teamwork-bees-linking-two-bee-262155599">Viesinsh/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>It turns out vertical shaking doesn’t disrupt these pair bonds as much as horizontal shaking does. Using a computational model, we showed that bonds between bees located closer to where the swarm attaches to the board stretch more than bonds between bees at the far tip of the swarm. Bees could sense these different amounts of stretching, and use them as a directional signal to move upwards and make the swarm spread. </p>
<p>In other words, bees move from locations where bonds stretch less, to locations where they stretch more. This behavioral response improves the collective stability of the swarm as a whole at the expense of increasing the average burden experienced by the individual bee. The result is a kind of “mechanical altruism”, as individual bees endure strain for the benefit of the swarm’s greater good.</p>
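<p>The core of this mechanism can be sketched as a toy one-dimensional model. The assumptions here are ours, not the study’s computational model: each bond is treated as a linear spring, so a bond higher in a hanging chain supports the weight of every bee below it and therefore stretches more, and a bee compares the stretch above it with the stretch at its own position.</p>

```python
def bond_stretch(num_bees_below, bee_weight=1.0, stiffness=1.0):
    """Stretch of the bond above a bee in a hanging chain.
    That bond supports this bee plus every bee below it, so stretch
    grows toward the attachment point (linear spring: load/stiffness)."""
    return bee_weight * (num_bees_below + 1) / stiffness

def preferred_move(stretch_here, stretch_above):
    """Bees move from where bonds stretch less toward where they
    stretch more -- i.e. upward, which spreads and stabilises the swarm."""
    return "up" if stretch_above > stretch_here else "stay"
```

<p>Running this rule over a whole chain, every bee below the top senses more stretch above it and drifts upward, reproducing in miniature the spreading, flattening response the real swarm showed under horizontal shaking.</p>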
<h2>Engineering lessons, taught by bees</h2>
<p><a href="https://scholar.google.com/citations?user=xH5Ryy4AAAAJ&hl=en&oi=ao">As a broadly trained physicist studying animal behavior</a>, I am fascinated by this kind of evolved solution in nature. It’s amazing that honeybees can create multi-functional materials – made of their many individual bodies – that can shape shift without a global conductor telling them all what to do. No one is in charge, but together they keep the swarm intact. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jswSJznyvDI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Bee swarms exhibit emergent intelligence, behaving as one unit.</span></figcaption>
</figure>
<p>What if engineers could take those solutions and lessons from nature and apply them to buildings? Instead of a bundle of buzzing bees, could you imagine a bundle of buzzing robots that cling to each other to create adaptive structures in real time? I can <a href="https://www.arch2o.com/hypercell-thesis-aadrl/">envision shelters</a> that deploy rapidly in the face of natural disasters like hurricanes, or construction materials that can sense an earthquake’s vibrations and respond in the same way that these swarms react to a branch in wind.</p>
<p>Essentially, these bees create an autonomous material that – embedded within itself – has multiple abilities. The swarm can sense information from the nearby environment, based on how much the pair bonds are stretching. It can compute, in the sense that it figures out which regions have more bond stretching. And it can actuate, meaning move in the direction toward more stretching.</p>
<p>These properties are some of the longstanding aspirations in the fields of <a href="https://doi.org/10.1126/science.1261689">multi-functional materials and robotics materials</a>. The idea is to combine affordable robots that each have a minimal amount of mechanical components and sensors, like the <a href="http://news.mit.edu/2019/self-transforming-robot-blocks-jump-spin-flip-identify-each-other-1030">M-blocks</a>. Together they can sense their local environment, interact with neighboring robots and make their own decisions on where to move next. As Hiro, the young roboticist in the Disney movie “<a href="https://youtu.be/ep2-W1X65KI?t=55">Big Hero 6</a>” says, “The applications to this tech are limitless.”</p>
<p>For the moment, <a href="https://www.thisiscolossal.com/2019/12/spatial-bodies-aujik/?fbclid=IwAR1QWvNE_NiEiVsZR6pWZRFGoknPjsex0Ji05L-OFTDXmv8WlXDtlkiy7d8">this is still science fiction</a>. But the more researchers know about the honeybees’ natural solutions, the closer we get to making that dream come true.</p>
<p class="fine-print"><em><span>Orit Peleg receives funding from the Human Frontiers Science Program. </span></em></p>A swarm of honeybees can provide valuable lessons about how a group of many individuals can work together to accomplish a task, even with no one in charge. Roboticists are taking notes.Orit Peleg, Assistant Professor of Computer Science, University of Colorado BoulderLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1264832019-12-04T13:27:09Z2019-12-04T13:27:09ZRobotics researchers have a duty to prevent autonomous weapons<figure><img src="https://images.theconversation.com/files/304560/original/file-20191201-156120-1g0lx17.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Both the hardware and software of commercial drones can be changed easily.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Drones-Regulations/a4f3ca06278c49f58504551dadaf0faf/6/0">AP Photo/Seth Wenig</a></span></figcaption></figure><p>Robotics is rapidly being transformed by advances in artificial intelligence. And the benefits are widespread: We are seeing safer vehicles with the <a href="https://www.subaru.com/engineering/eyesight.html">ability to automatically brake in an emergency</a>, robotic arms <a href="https://www.vox.com/2017/5/26/15656120/manufacturing-jobs-automation-ai-us-increase-robot-sales-reshoring-offshoring">transforming factory lines that were once offshored</a> and <a href="https://www.starship.xyz/">new robots</a> that can do everything from shop for groceries to <a href="https://www.wired.com/story/postmates-delivery-robot-serve/">deliver prescription drugs</a> to people who have trouble doing it themselves.</p>
<p>But our ever-growing appetite for intelligent, autonomous machines poses a host of ethical challenges.</p>
<h2>Rapid advances have led to ethical dilemmas</h2>
<p>These ideas and more were swirling as my colleagues and <a href="https://scholar.google.com/citations?user=-YOtPcIAAAAJ&hl=en">I</a> met in early November at one of the world’s largest autonomous robotics-focused research conferences – <a href="https://www.ieee-ras.org/about-ras/ras-calendar/event/1141-iros-2019-international-conference-on-intelligent-robots-and-systems">the IEEE International Conference on Intelligent Robots and Systems</a>. There, academics, corporate researchers, and government scientists presented developments in algorithms that allow robots to make their own decisions.</p>
<p>As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing. Take, for example, a computer’s ability to identify objects in an image: in 2010, the state of the art was successful <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf">only about half of the time</a>, and it was stuck there for years. Today, the best published algorithms <a href="https://paperswithcode.com/sota/image-classification-on-imagenet">reach 86% accuracy</a>. That advance alone allows autonomous robots to understand what they are seeing through their camera lenses. It also shows the rapid pace of progress over the past decade, driven by developments in AI.</p>
<p>This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304561/original/file-20191201-156095-1ictzjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">San Francisco became the first U.S. city to ban the use of facial recognition technology by police and other city agencies. This same technology can be coupled with drones, which are becoming more autonomous.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Facial-Recognition-Backlash/5d8a15313554488986c7eb5c2401c9d7/9/0">AP Photo/Eric Risberg</a></span>
</figcaption>
</figure>
<p>But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions <a href="https://www.wilsoncenter.org/sites/default/files/ai_and_privacy.pdf">related to privacy and security have been fundamentally altered</a>. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.</p>
<h2>Easy to modify systems</h2>
<p>When developing machines that can make their own decisions – typically called autonomous systems – the ethical questions that arise are arguably more concerning than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities that were once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=395&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=395&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=395&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=496&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=496&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304562/original/file-20191201-156112-1ydr1i2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=496&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Commercial drones allow for many beneficial uses, such as delivering medicine or spraying for mosquitoes.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Zanzibar-Drones-Fight-Malaria/84a8645acf2a4b78bfd017e683e048a2/5/0">AP Photo/Haroub Hussein</a></span>
</figcaption>
</figure>
<p>People with no background in computer science can <a href="https://www.kaggle.com/learn/overview">learn some of the most state-of-the-art artificial intelligence tools</a>, and robots are more than willing to let you <a href="https://developer.dji.com/onboard-sdk/documentation/sample-doc/advanced-sensing-object-detection.html">run your newly acquired machine learning techniques</a> on them. There are online forums filled with people <a href="https://www.quora.com/What-is-the-best-way-to-understand-the-basics-of-robotics">eager to help anyone learn how to do this</a>.</p>
<p>With earlier tools, it was already easy enough to program your minimally modified drone <a href="https://www.instructables.com/id/Vision-Based-Object-Tracking-and-Following-Using-3/">to identify a red bag and follow it</a>. <a href="http://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html">More recent object detection technology</a> can track more than 9,000 different object types. Combined with <a href="https://spectrum.ieee.org/automaton/robotics/drones/skydios-new-drone-is-smaller-even-smarter-and-almost-affordable">newer, more maneuverable drones</a>, it’s not hard to imagine how easily they could be equipped with weapons. What’s to stop someone from strapping an explosive or another weapon to a drone equipped with this technology? </p>
<p>Using a variety of techniques, autonomous drones are already a threat. They have been caught <a href="https://www.washingtonpost.com/news/checkpoint/wp/2017/06/14/isis-drones-are-attacking-u-s-troops-and-disrupting-airstrikes-in-raqqa-officials-say/">dropping explosives on U.S. troops</a>, <a href="https://techcrunch.com/2019/08/29/climate-activists-plan-to-use-drones-to-shut-down-heathrow-airport-next-month/">shutting down airports</a> and <a href="https://www.nytimes.com/2018/08/10/world/americas/venezuela-video-analysis.html">being used in an assassination attempt on Venezuelan leader Nicolas Maduro</a>. The autonomous systems that are being developed right now could make staging such attacks easier and more devastating.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gEnG2tv5LJM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Reports indicate that the Islamic State is using off-the-shelf drones, some of which are being used for bombings.</span></figcaption>
</figure>
<h2>Regulation or review boards?</h2>
<p>About a year ago, a group of researchers in artificial intelligence and autonomous robotics <a href="https://futureoflife.org/lethal-autonomous-weapons-pledge/">put forward a pledge</a> to refrain from developing lethal autonomous weapons. They defined lethal autonomous weapons as platforms that are capable of “selecting and engaging targets without human intervention.” As a robotics researcher who isn’t interested in developing autonomous targeting techniques, <a href="https://www.colorado.edu/irt/autonomous-systems/2018/08/08/cu-engineering-faculty-respond-lethal-autonomous-weapons-pledge">I felt that the pledge missed the crux of the danger</a>. It glossed over important ethical questions that need to be addressed, especially those at the broad intersection of drone applications that could be either benign or violent.</p>
<p>For one, the researchers, companies and developers who wrote the papers and built the software and devices generally aren’t doing it to create weapons. However, they might inadvertently enable others, with minimal expertise, to create such weapons. </p>
<p>What can we do to address this risk?</p>
<p>Regulation is one option, and one already in use: aerial drones, for example, are banned near airports and over national parks. Such rules are helpful, but they don’t prevent the creation of weaponized drones. Traditional weapons regulations are not a sufficient template, either. They generally tighten controls on the source material or the manufacturing process. That would be nearly impossible with autonomous systems, where the source materials are widely shared computer code and the manufacturing process can take place at home using off-the-shelf components. </p>
<p>Another option would be to follow in the footsteps of biologists. In 1975, they held <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC432675/">a conference on the potential hazards of recombinant DNA</a> at Asilomar in California. There, experts agreed to voluntary guidelines that would direct the course of future work. For autonomous systems, such an outcome seems unlikely at this point. Many research projects that could be used in the development of weapons also have peaceful and incredibly useful outcomes.</p>
<p>A third choice would be to establish self-governance bodies at the organization level, such as the <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/institutional-review-boards-frequently-asked-questions">institutional review boards</a> that currently oversee studies on human subjects at companies, universities and government labs. These boards consider the benefits to the populations involved in the research and craft ways to mitigate potential harms. But they can regulate only research done within their institutions, which limits their scope. </p>
<p>Still, a large number of researchers would fall under these boards’ purview – within the autonomous robotics research community, nearly every presenter at technical conferences is a member of an institution. Research review boards would be a first step toward self-regulation and could flag projects that could be weaponized.</p>
<h2>Living with the peril and promise</h2>
<p>Many of my colleagues and I are excited to develop the next generation of autonomous systems. I feel that the potential for good is too promising to ignore. But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.</p>
<p class="fine-print"><em><span>Christoffer Heckman receives funding from the Defense Advanced Research Projects Agency and the National Science Foundation.</span></em></p>Modified commercial drones are getting more powerful and can easily be turned into weapons. A researcher argues for ways to prevent their development.Christoffer Heckman, Assistant Professor of Computer Science, University of Colorado BoulderLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1194992019-07-22T10:55:31Z2019-07-22T10:55:31ZWaiting for an undersea robot in Antarctica to call home<figure><img src="https://images.theconversation.com/files/281448/original/file-20190626-76705-w53a62.jpg?ixlib=rb-1.1.0&rect=0%2C310%2C5184%2C3135&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">One of two underwater gliders is deployed from a research ship into Antarctic waters.</span> <span class="attribution"><span class="source">NOAA</span></span></figcaption></figure><p>“Call! Just call!” I think loudly in my head. “Did something happen? Are you okay?”</p>
<p>I might seem like a worried parent waiting for a teenager to report in from an unsupervised outing. Rather, I’m a <a href="https://www.linkedin.com/in/jenmariewalsh">research biologist</a> with the Antarctic Ecosystem Research Division at the National Oceanic and Atmospheric Administration. It’s late February 2019, and I am waiting for an autonomous underwater glider in Antarctica to surface and call me via satellite, so I can give it new diving instructions. The longest it’s supposed to go without surfacing is eight hours, and it’s now been nine.</p>
<p>Did it get stuck under an iceberg? An underwater ledge? I feel so helpless; I’m 9,000 miles away in San Diego and all I can do is chew my fingernails and think, “No. This can’t happen. We can’t lose this glider so close to the end.” </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=565&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=565&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=565&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=711&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=711&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281837/original/file-20190628-94720-cx387f.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=711&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The survey area where gliders measured Antarctic krill populations.</span>
<span class="attribution"><span class="source">NOAA</span></span>
</figcaption>
</figure>
<p>Our research team is two-and-a-half months into a three-month-long mission just north of the Antarctic Peninsula. This is our first time deploying gliders so far from home, and our hope for a successful field season – not to mention a great deal of research – depends on recovering the two gliders our group deployed in December 2018. The gliders are now full of oceanographic data that will help us provide scientific advice on how best to conserve the Antarctic ecosystem as the area around the peninsula warms faster than almost any other region on Earth, which may adversely affect the animals that live there.</p>
<h2>9 hours, 30 minutes: No call</h2>
<p>For over 30 years, the <a href="https://swfsc.noaa.gov/textblock.aspx?id=551&ParentMenuId=42">NOAA group I’m part of</a> has conducted studies to estimate how many Antarctic krill, small shrimp-like creatures that support the diverse Antarctic food web, live around the Antarctic Peninsula.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=467&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=467&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=467&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=586&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=586&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281449/original/file-20190626-76734-1ycpivt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=586&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Antarctic krill, <em>Euphausia superba</em>, can grow up to about 2.5 inches long.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Krill666.jpg">Uwe Kils/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Krill feed the penguins and seals that breed in this area every summer, as well as the whales and fish that feed here year-round, while also supporting a major fishery. You may have seen bright-red dietary supplements made from krill oil prominently displayed at the pharmacy. Our data help establish catch limits for the krill fishery, ensuring enough krill remain in the ocean to sustain the population after people and animals take what they need. Without good data to support fishery-management decisions, krill fishing could <a href="https://www.ccamlr.org/en/fisheries/krill-%E2%80%93-biology-ecology-and-fishing">undermine the food web</a> for which Antarctica is so well known, as demand for supplements and other <a href="https://bestmarketherald.com/krill-oil-market-demand-expected-to-raise-by-dietary-supplements-segment-in-upcoming-years/">krill products surges</a>.</p>
<h2>10 hours: No call</h2>
<p>Until three years ago, my program chartered a research vessel for a month each year to sail around the Antarctic Peninsula and <a href="https://swfsc.noaa.gov/contentblock.aspx?ID=14326&ParentMenuId=42">estimate the biomass of krill</a>. But after 2016, rising vessel costs eliminated our surveys. For our program to continue, we had to find a creative way to collect our data in Antarctica without actually going to Antarctica. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281776/original/file-20190628-94724-w5a3pn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An autonomous glider in the ocean.</span>
<span class="attribution"><span class="source">NOAA</span></span>
</figcaption>
</figure>
<p>Our solution was to use autonomous underwater gliders, which can be deployed in just a few hours by a small team from a ship in Antarctica, and then recovered months later. Gliders can dive to 3,000 feet, cover thousands of miles and follow commands from anywhere in the world with a laptop and an internet connection. Their batteries last six months, which means that they can collect much more data for much less money than a bunch of scientists on a research vessel. </p>
<p>The gliders resemble torpedoes in appearance, but contain three massive batteries and an array of scientific sensors that collect much of the same data we used to collect from a ship. Although the gliders are able to transmit small amounts of data via satellite throughout the deployment, the most valuable data are stored on the glider. If we lose a glider, which is always a possibility when you let something roam free in the ocean unattended for months, then we also lose the data.</p>
<p>We had effectively replaced ourselves with drones. But would they work?</p>
<h2>12 hours: No call</h2>
<p>For most of our team, the transition just a year ago from annual research voyages to the aquatic versions of C-3PO and R2-D2 was exciting. Secretly, though, I was terrified. I had spent my career as a scientist collecting krill samples from research vessels for biochemical analyses of their tissues. Suddenly I found myself ousted by oceanographic robots full of cables, wires, circuit boards and all sorts of other technological gadgetry.</p>
<p>These are not what you’d call smart robots. A bit like human toddlers, they have some degree of self-awareness, but would destroy themselves without semi-constant monitoring and instructions on how deep to dive or where to go. Outside supervision is especially important in the Southern Ocean, which is full of seamounts, canyons, strong currents and, most importantly, icebergs. </p>
<p>You can’t glider-proof the ocean the way you can baby-proof a house, so I had to forget everything I knew about biochemistry and learn as much as I could about glider piloting in 10 short months.</p>
<h2>13 hours: No call</h2>
<p>All that training and practice felt like 10 minutes by the time we finally packed up the gliders and shipped them to the Southern Hemisphere for their first Antarctic deployments. The commands for how deep to dive and where to go seemed simple enough, but the gliders responded as unpredictably as the ocean itself. </p>
<p>A near-disastrous practice deployment in San Diego revealed how slowly they maneuver, particularly in strong currents. Piloting them felt like trying to drive a remote-control semi-truck through a go-kart course, which reinforced our apprehension about steering these things from the other side of the planet, through one of the most remote and treacherous oceans on Earth.</p>
<p>Never mind the wind and the currents and the icebergs. What made this deployment far scarier was that if things started to go horribly wrong, we had no way to get the gliders back. It was like dropping a toddler off at college on another continent: What if he needs you and you can’t get to him?</p>
<h2>14 hours: No call</h2>
<p>Almost exactly 10 months from our first day of glider training, we carried the gliders across the Drake Passage on a research vessel bound for the Antarctic Peninsula. The deployments were flawless, and over the next few days, our confidence began to build. We quickly learned that icebergs were enemy number one, and they were formidable opponents. Satellite images of icebergs were <a href="https://www.polarview.aq/antarctic">available every couple of days</a>, and we overlaid maps of planned glider tracks onto those images so we could steer the gliders around any ice in their way. The trouble was, even the newest images we received were already a day old, and the ice had already moved.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=312&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=312&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=312&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=392&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=392&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281839/original/file-20190628-94708-1yfhkyg.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=392&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">On this chart of the South Shetland Islands, one intended glider path is marked in straight gray lines. Circled in red in the middle is the iceberg the researchers called ‘Yacu.’</span>
<span class="attribution"><span class="source">NOAA</span></span>
</figcaption>
</figure>
<p>Smaller icebergs were usually avoidable, but around three weeks into the deployment, “Yacu” appeared on the scene. Inspired by a <a href="http://www.salem-news.com/articles/august162010/monster-amazon-ta.php">mythological South American snake</a> that eats everything in its way, that was the nickname we gave a 12.5-mile-wide iceberg from the Weddell Sea that drifted right into the path of one of the gliders. Yacu stuck around for the rest of the deployment, every few days spawning smaller (but still huge) icebergs that posed a constant and unpredictable threat to gliders already at the mercy of currents, tides and wind.</p>
<p>If a glider gets trapped under an obstacle and senses that it’s been underwater for too long, it drops an emergency weight to rocket itself to the surface for an immediate recovery. Once a glider drops its weight, it can’t dive anymore. So if it is trapped under ice, it’s likely to stay trapped under ice. And one way to know if a glider is trapped is that it stops calling in, because it can connect to satellites only when it’s at the surface.</p>
<h2>15 hours: No call</h2>
<p>And then…</p>
<p>Ding ding! Ding ding! My laptop screams at me after 16 long hours: The glider is at the surface.</p>
<p>It is well past 9 p.m., but every member of our five-person team has been glued to a computer since early afternoon, and we collectively sigh with relief. We now think the glider probably surfaced after the first eight hours, failed to connect to the satellite and resumed diving, which can occasionally happen. The reason for the gap is unimportant compared to our elation. A couple of weeks later, we successfully recovered both gliders on schedule and completed our first autonomous Antarctic field season. </p>
<p>One key finding is that we can, in fact, replace a vessel-based fishery assessment with a glider-based one in less than a year. With gliders, we can get krill biomass estimates comparable to those we would expect from a ship. That means we can use gliders to continue to provide critical data for managing the krill fishery.</p>
<p>This is a profound accomplishment for us and for NOAA, and it also has far-reaching promise for the future of fisheries research globally. The cost of science keeps going up, and autonomous instruments offer an affordable way to collect critical data for effectively managing ocean resources and conserving fragile marine ecosystems worldwide. </p>
<p>Our gliders are like toddlers in one final way: they’re advanced technology, yet they’re still in their infancy. Their ongoing usefulness for understanding our changing planet in real time will depend on new sensors and instruments yet to be developed. What we accomplished is only the tip of Yacu compared with what the future of autonomous oceanographic research holds.</p>
<p class="fine-print"><em><span>Jennifer Walsh is employed and funded by the U.S. National Oceanic and Atmospheric Administration. The scientific results and conclusions, as well as any views or opinions expressed herein, are those of the author(s) and do not necessarily reflect the views of NOAA or the Department of Commerce.</span></em></p>Sending autonomous vehicles to the Southern Ocean can be fraught with anxiety, especially if one of them doesn’t make radio contact when it’s supposed to.Jennifer Walsh, Research Biologist, National Oceanic and Atmospheric AdministrationLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1130232019-03-19T18:50:37Z2019-03-19T18:50:37ZAutonomous transport will shape our cities’ future – best get on the right path early<figure><img src="https://images.theconversation.com/files/263562/original/file-20190313-86696-a73ced.jpg?ixlib=rb-1.1.0&rect=17%2C0%2C5890%2C3968&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Cities have a choice of autonomous vehicle futures: cars or mass transit vehicles. Which one we adopt is likely to determine how people-friendly our cities are.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/light-trails-perth-sunset-1214125723">SueBeDoo888/Shutterstock</a></span></figcaption></figure><p>A unique opportunity exists for infrastructure investment in Australia as transport as we know it faces disruption from autonomous vehicles.</p>
<p>Disruption is not a dirty word. Traditional transport models are being transformed for the better by savvy young upstarts: the taxi industry by Uber, for instance, and even bus services by on-demand provider Bridj in parts of Sydney.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/disruption-ahead-personal-mobility-is-breaking-down-old-transport-divides-70338">Disruption ahead: personal mobility is breaking down old transport divides</a>
</strong>
</em>
</p>
<hr>
<p>How do we manage this rapidly evolving technology, and what is the role of local government? </p>
<p>Autonomous vehicles will soon be a familiar sight in bush and city landscapes. In New South Wales the transport minister, <a href="https://www.smh.com.au/national/nsw/we-wont-need-train-and-bus-drivers-transport-ministers-prediction-20170816-gxxhsp.html">Andrew Constance</a>, predicted in 2017 that in future public transport might not be needed, and certainly not drivers, because autonomous cars would handle everything.</p>
<p>I don’t think this will happen. Cars are good servants but bad masters in shaping our cities, even autonomous ones.</p>
<h2>What will a city of autonomous cars look like?</h2>
<p>A fully car-based approach to autonomous vehicles would involve cars driving around suburbs day and night, searching for people to pick up on demand. These vehicles would move into corridors, main roads and freeways, travelling at high speeds with just a metre or so between them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-winners-and-losers-in-the-race-for-driverless-cars-63874">The winners and losers in the race for driverless cars</a>
</strong>
</em>
</p>
<hr>
<p>Increased road capacity, safety and the very real prospect of solar-powered cars are undeniable benefits.</p>
<p>But what kind of city would we have? We would see more urban sprawl, possibly worse congestion and a departure from walkable cities.</p>
<p>We would lose an opportunity to reclaim pleasing city grids and urban centres. These spaces, which our city planners intended for pedestrians, have often been devoured by cars but are now returning to their rightful place as meeting spaces.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/smart-cities-does-this-mean-more-transport-disruptions-63638">Smart cities: does this mean more transport disruptions?</a>
</strong>
</em>
</p>
<hr>
<h2>The case for trackless trams</h2>
<p>Autonomous transit vehicles with a collective benefit to society offer us a chance to continue to reclaim these spaces by providing rapid shared mobility where it doesn’t exist today. This is why I like <a href="https://theconversation.com/why-trackless-trams-are-ready-to-replace-light-rail-103690">the trackless tram</a>: it offers autonomous transport with the high quality of light rail, but at a tenth of the cost.</p>
<p>Trackless trams give us the capacity not only to catch up on years of under-investment in transport infrastructure, but also to fund ambitious urban regeneration projects that will shape our future cities. This is what is driving <a href="https://www.youtube.com/watch?v=iqz9GXJuakU&t=4s">trackless tram studies</a> in Townsville, Sydney’s inner west, Wyndham in Melbourne and <a href="https://sbenrc.com.au/research-programs/1-62/">Perth</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-trackless-trams-are-ready-to-replace-light-rail-103690">Why trackless trams are ready to replace light rail</a>
</strong>
</em>
</p>
<hr>
<p>It’s also possible to use trackless trams to create new opportunities on the edges of our cities, like the <a href="https://www.planning.nsw.gov.au/Plans-for-your-area/Priority-Growth-Areas-and-Precincts/Western-Sydney-Aerotropolis">Western Sydney Aerotropolis</a>. There, Liverpool City Council wants to maximise the benefits of the new airport through transport connectivity back to the city’s CBD. Dr Tim Williams, Australasia cities leader at ARUP, declared Liverpool to be the <a href="https://www.smh.com.au/national/liverpool-the-surprise-star-of-australia-s-future-city-planning-20190224-p50zve.html">surprise star of Australia’s future city planning</a> for this reason. </p>
<p>Liverpool’s CBD is less than 18km away from the new airport site now under construction, but it might as well be a world away given the narrow roads and rural lands that currently separate the two.</p>
<p>NSW Opposition Leader Michael Daley has <a href="https://www.michaeldaley.com.au/labor_announces_new_rapid_transport_link_to_connect_liverpool_and_western_sydney_airport">committed A$10 million</a> towards preliminary work on a rapid transit link between the airport and Liverpool should he become premier after the March 23 election.</p>
<p>And Liverpool Council is investing significant resources to find out what these upgrades should be. This is an opportunity to embrace autonomous vehicles like trackless trams to create a strong link between the new airport and aerotropolis.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/western-sydney-aerotropolis-wont-build-itself-a-lot-is-riding-on-what-governments-do-97462">Western Sydney Aerotropolis won't build itself – a lot is riding on what governments do</a>
</strong>
</em>
</p>
<hr>
<h2>The role of city councils</h2>
<p>Historically, councils have often been the passive recipients of state and federal investments. But councils like Liverpool are recognising their role in championing infrastructure investment that will support high-quality future growth.</p>
<p>Councils are also identifying that they can control many of the mechanisms, particularly planning controls, that could be useful to minimise value leakage and <a href="https://theconversation.com/paying-for-infrastructure-means-using-land-value-capture-but-does-it-also-mean-more-tax-58731">maximise value capture for the common good</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/paying-for-infrastructure-means-using-land-value-capture-but-does-it-also-mean-more-tax-58731">Paying for infrastructure means using 'land value capture', but does it also mean more tax?</a>
</strong>
</em>
</p>
<hr>
<p>Developers are telling us that if we can give them up-front certainty on quality and timing of infrastructure and associated land development opportunities, then they can be willing partners in co-funding new transport connections like a trackless tram.</p>
<p>The challenge is to create partnerships with all levels of government, developers and the community, to seize the opportunities from current levels of infrastructure investment and to enable bold, rather than risk-averse, approaches to the future.</p>
<p>New technology brings new challenges, but also new opportunities. For the sake of future generations, we need to get in before the window closes.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/utopia-or-nightmare-the-answer-lies-in-how-we-embrace-self-driving-electric-and-shared-vehicles-90920">Utopia or nightmare? The answer lies in how we embrace self-driving, electric and shared vehicles</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/113023/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Newman works for Curtin University on a project which involves several local governments, including the City of Liverpool, examining how trackless trams can enable urban regeneration. </span></em></p>Autonomous mass transit vehicles like ‘trackless trams’ are a better bet than autonomous cars to give us people-friendly cities that capture the value created by infrastructure for the common good.Peter Newman, Professor of Sustainability, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1097602019-03-05T11:38:06Z2019-03-05T11:38:06ZAutonomous drones can help search and rescue after disasters<figure><img src="https://images.theconversation.com/files/261996/original/file-20190304-92298-ugrhzx.jpg?ixlib=rb-1.1.0&rect=564%2C225%2C4814%2C2790&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Are there people down there who need help?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/aerial-view-national-disaster-texas-small-710897161">Roschetzky Photography/Shutterstock.com</a></span></figcaption></figure><p>When disasters happen – whether a natural disaster like a flood or earthquake, or a human-caused one like a mass shooting or bombing – it can be extremely dangerous to send first responders in, even though there are people who badly need help. </p>
<p>Drones are useful, and are helping in the recovery <a href="https://abcnews.go.com/US/tornadoes-alabama-kill-23-figure-officials-expect-rise/story?id=61448219">after the deadly Alabama tornadoes</a>, but most require individual pilots, who fly the unmanned aircraft by remote control. That limits how quickly rescuers can view an entire affected area, and can delay actual aid from reaching victims.</p>
<p>Autonomous drones could cover more ground more quickly, but would only be more effective if they were able on their own to help rescuers identify people in need. At the <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/index.php">University of Dayton Vision Lab</a>, we are working on developing systems that can help spot people or animals – especially ones who might be trapped by fallen debris. Our technology mimics the behavior of a human rescuer, looking briefly at wide areas and quickly choosing specific regions to focus in on, to examine more closely. </p>
<h2>Looking for an object in a chaotic scene</h2>
<p>Disaster areas are often cluttered with downed trees, collapsed buildings, torn-up roads and other disarray that can make spotting victims in need of rescue very difficult.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=326&fit=crop&dpr=1 600w, https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=326&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=326&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=410&fit=crop&dpr=1 754w, https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=410&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/261973/original/file-20190304-92310-v7zhtg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=410&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Our system can spot people amid busy surroundings.</span>
<span class="attribution"><span class="source">University of Dayton Vision Lab</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>My research team has developed an artificial neural network system that can run in a computer onboard a drone. This system can emulate some of the excellent ways human vision works. It analyzes images captured by the drone’s camera and communicates notable findings to human supervisors.</p>
<p>First, our system processes the images to <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/wide_area_surveillance/visibility_improvements.php">improve their clarity</a>. Just as humans <a href="https://scienceline.ucsb.edu/getkey.php?key=2577">squint their eyes</a> to adjust their focus, our technologies take detailed estimates of darker regions in a scene and computationally lighten the images. When images are too hazy or foggy, the system <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/wide_area_surveillance/visibility_improvements.php">recognizes they’re too bright</a> and reduces the whiteness of the image to see the actual scene more clearly.</p>
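As a minimal illustration of the adjustment described above (lighten scenes that read as too dark, darken scenes washed out by haze), here is a toy gamma-correction sketch; the thresholds and exponents are illustrative choices of mine, not values from the Vision Lab’s system:

```python
import numpy as np

def enhance_visibility(image, dark_threshold=0.35, bright_threshold=0.85):
    """Toy contrast adjustment: lighten dark scenes, tone down hazy ones.

    `image` is a float array with values in [0, 1]. Thresholds and gamma
    values are illustrative only, not taken from the published work.
    """
    mean = image.mean()
    if mean < dark_threshold:    # scene is dark: gamma < 1 lightens it
        return np.clip(image ** 0.5, 0.0, 1.0)
    if mean > bright_threshold:  # scene is washed out by haze: darken it
        return np.clip(image ** 1.5, 0.0, 1.0)
    return image

dark_scene = np.full((4, 4), 0.1)
print(enhance_visibility(dark_scene).mean())  # ~0.316, visibly lighter
```

Real enhancement pipelines operate region by region rather than on a single global mean, but the principle is the same.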
<p>In a rainy environment, human brains use a brilliant strategy to see clearly. By noticing <a href="https://physics.stackexchange.com/questions/203576/why-can-we-see-through-rain">the parts of a scene that don’t change</a> – and the ones that do, as the raindrops fall – people can see reasonably well despite rain. Our technology uses the same strategy, continuously investigating the contents of each location <a href="http://doi.org/10.1007/s11263-006-0028-6">in a sequence of images</a> to get <a href="https://link.springer.com/article/10.1007%2Fs11263-014-0759-8">clear information</a> about the objects in that location. </p>
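That temporal strategy can be sketched in a few lines: a raindrop occupies any given pixel for only a frame or two, so a per-pixel median over a short stack of aligned frames keeps the stable scene behind it. This is an illustrative sketch, not the authors’ implementation:

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel temporal median over a stack of aligned frames.

    Transient occluders such as raindrops affect any given pixel in only
    a few frames, so the median keeps the stable scene behind them.
    `frames` is a sequence of same-shaped 2-D arrays.
    """
    return np.median(np.stack(frames), axis=0)

# A static scene of 0.5 everywhere, with a bright "raindrop" in one frame.
frames = [np.full((3, 3), 0.5) for _ in range(5)]
frames[2][1, 1] = 1.0
clean = remove_transients(frames)
print(clean[1, 1])  # 0.5: the transient is gone
```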
<p>We also have developed technology that can make images from a drone-borne camera <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/wide_area_surveillance/visibility_improvements.php">larger, brighter and clearer</a>. By <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/video_preprocessing/super_resolution.php">expanding the size</a> of the image, both algorithms and people can see key features more clearly.</p>
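The resizing step alone can be illustrated with the simplest possible enlargement, pixel replication. Real super-resolution uses learned models to add detail; this toy version only shows why a bigger image is easier to inspect:

```python
import numpy as np

def upscale_nearest(image, factor=2):
    """Enlarge an image by pixel replication (nearest neighbour).

    A stand-in for the super-resolution step described above: each pixel
    becomes a factor-by-factor block in the output.
    """
    return np.kron(image, np.ones((factor, factor)))

patch = np.array([[0.2, 0.8],
                  [0.6, 0.4]])
big = upscale_nearest(patch, factor=2)
print(big.shape)  # (4, 4)
```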
<h2>Confirming objects of interest</h2>
<p>Our system can identify people in various positions, such as lying prone or curled in the fetal position, even from different viewing angles and in varying lighting conditions. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=345&fit=crop&dpr=1 600w, https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=345&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=345&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=434&fit=crop&dpr=1 754w, https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=434&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/261975/original/file-20190304-92280-l9wfb9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=434&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Confusing and dim lighting can make it hard to identify people.</span>
<span class="attribution"><span class="source">University of Dayton Vision Lab</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>The human brain can look at one view of an object and <a href="https://psycnet.apa.org/record/2017-40276-001">envision how it would look from other angles</a>. When police issue an alert asking the public to look for someone, they often include a still photo – knowing that viewers’ minds will imagine three-dimensional views of how that person might look, and recognize them on the street, even if they don’t get the exact same view as the photo offered. We employ this strategy by computing three-dimensional models of people – either general human shapes or more detailed projections of specific people. Those models are used to match similarities <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/human_identification/face_detection.php">when a person appears in a scene</a>.</p>
<p>We have also developed a way to detect parts of an object, without seeing the whole thing. Our system can be trained to detect and locate a leg sticking out from under rubble, a hand waving at a distance, or a head popping up above a pile of wooden blocks. It can tell a person or animal apart from a tree, bush or vehicle.</p>
<h2>Putting the pieces together</h2>
<p>During its initial scan of the landscape, our system mimics the approach of an airborne spotter, examining the ground to find possible objects of interest or regions worth further examination, and then looking more closely. For example, an aircraft pilot who is looking for a truck on the ground would typically pay less attention to lakes, ponds, farm fields and playgrounds – because trucks are less likely to be in those areas. Our autonomous technology employs the same strategy, focusing the search on the most significant regions of the scene.</p>
<p>Then our system investigates each selected region to obtain information about the shape, structure and texture of objects there. When it detects a set of features that matches a human being or part of a human, it flags that as a location of a victim. </p>
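That attention step can be mimicked crudely: split the image into tiles, score each by local variance (clutter and structure score high, flat fields and water score low), and keep only the top few for closer analysis. The sketch and its parameter choices are illustrative, not taken from the published system:

```python
import numpy as np

def salient_tiles(image, tile=8, top_k=3):
    """Rank non-overlapping tiles by local variance and keep the top few.

    A crude stand-in for the attention step: cluttered, structured
    regions (debris, possible people) tend to show higher local variance
    than flat fields, ponds or empty roads.
    """
    h, w = image.shape
    scored = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            scored.append((image[r:r + tile, c:c + tile].var(), (r, c)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:top_k]]

# A mostly flat 16x16 scene with one cluttered (high-variance) tile.
rng = np.random.default_rng(0)
scene = np.zeros((16, 16))
scene[8:16, :8] = rng.random((8, 8))
print(salient_tiles(scene, tile=8, top_k=1))  # [(8, 0)]
```

The selected tiles would then be passed to the (much more expensive) detection stage, which is what keeps the whole pipeline fast.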
<p>The drone also collects GPS data about its location, and senses how far it is from other objects it’s photographing. That information lets the system calculate the exact location of each person needing assistance, and alert rescuers.</p>
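Under a flat-ground assumption, that last calculation can be sketched as a simple offset from the drone’s GPS fix using the range and bearing to a detection. The function, names and the flat-earth approximation here are mine, not from the article, and the range is assumed to already be projected onto the ground:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def locate_target(drone_lat, drone_lon, bearing_deg, ground_range_m):
    """Approximate lat/lon of a detected person from the drone's position.

    Small-offset flat-earth approximation, valid for the short ranges a
    drone camera covers. Bearing is in degrees clockwise from north.
    """
    d_north = ground_range_m * math.cos(math.radians(bearing_deg))
    d_east = ground_range_m * math.sin(math.radians(bearing_deg))
    lat = drone_lat + math.degrees(d_north / EARTH_RADIUS_M)
    lon = drone_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(drone_lat))))
    return lat, lon

# A person spotted 100 m due east of a drone hovering near Dayton, Ohio.
lat, lon = locate_target(39.7589, -84.1916, 90.0, 100.0)
```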
<p>This whole process – capturing an image, processing it for maximum visibility and analyzing it to identify people who might be trapped or concealed – takes about one-fifth of a second on the standard laptop computer that the drone carries, along with its high-resolution camera.</p>
<p>The U.S. military is interested in this technology. We have worked with the <a href="https://mrmc.amedd.army.mil/">U.S. Army Medical Research and Materiel Command</a> to find wounded individuals on a battlefield who need rescue. We have adapted this work to serve utility companies searching for <a href="https://www.udayton.edu/engineering/research/centers/vision_lab/research/scene_analysis_and_understanding/pipeline-intrusion-detection.php">intrusions on pipeline paths</a> by construction equipment or vehicles that may damage the pipelines. Utility companies are also interested in detecting any new construction of buildings near the pipeline pathways. All of these groups – and many more – are interested in technology that can see as humans can see, especially in places humans can’t be.</p><img src="https://counter.theconversation.com/content/109760/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Vijayan Asari is affiliated with University of Dayton, Dayton, Ohio, USA.
Dr. Vijayan Asari is a Fellow of SPIE (Society of Photo-Optical Instrumentation Engineers) and a Senior Member of IEEE (Institute of Electrical and Electronics Engineers). He is a member of the IEEE Computational Intelligence Society (CIS), IEEE Internet of Things (IoT) Community, Society for Imaging Science and Technology (IS&T), and member of the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). Dr. Asari is a co-organizer of several SPIE and IEEE conferences and workshops.
Dr. Asari advises graduate and undergraduate research students in Vision Lab at the University of Dayton.
Dr. Asari does not receive any funding for this specific research project. He uses internal funding for the human detection in complex environment research activity. Dr. Asari did receive funding from various organizations for several research activities that are linked to this research project. He received funding from the US Army Night Vision and Electronic Sensors Directorate (NVESD) for long-range human detection in infrared imagery, from the US Army Medical Research and Materiel Command (USAMRMC) for detection of wounded individuals on the battlefield (Research for Casualty Care and Management), from the Air Force Research Laboratory (AFRL) for object detection and tracking in wide area motion imagery, from Pacific Gas & Electric Company (PG&E) for automatic building change detection in satellite imagery, and from the Pipeline Research Council International (PRCI) for intrusion detection on pipeline right-of-ways.
</span></em></p>Drones already help with search and rescue, but teaching machines to identify victims on their own could free up human rescuers to do other crucial work.Vijayan Asari, Professor of Electrical and Computer Engineering, University of DaytonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1027362018-09-06T13:19:17Z2018-09-06T13:19:17ZAI has already been weaponised – and it shows why we should ban ‘killer robots’<figure><img src="https://images.theconversation.com/files/235215/original/file-20180906-190636-aogrro.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/unmanned-air-uav-spy-above-enemy-26952160?src=-ZOKXFCzFXCQZUjYk5R16g-1-16">Oleg Yarko/Shutterstock</a></span></figcaption></figure><p>A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But those countries already developing such weapons are instead trying to highlight their supposed benefits.</p>
<p>I witnessed this growing gulf at a recent UN meeting of more than 70 countries <a href="https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument">in Geneva</a>, where those in favour of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, <a href="https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf">the US claimed</a> that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.</p>
<p>Yet it’s highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/autonomous-weapons-systems-and-changing-norms-in-international-relations/8E8CC29419AF2EF403EA02ACACFCF223">setting undesirable standards</a> for its role in the use of force.</p>
<p>A series of <a href="https://futureoflife.org/open-letter-autonomous-weapons/">open letters</a> by prominent researchers speaking out against weaponising artificial intelligence have helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.</p>
<p>Most air defence systems <a href="https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf">already have</a> significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235217/original/file-20180906-190673-hk5e4w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Humans still press the trigger, but for how long?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541?src=eQqZybPxaHhkvow-YSqfIA-1-1">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Meanwhile, another important question raised by current technology is missing from the ongoing discussion. Remotely operated drones are currently used by several countries’ militaries to drop bombs on targets. But we know from incidents <a href="https://www.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/The%20Civilian%20Impact%20of%20Drones.pdf">in Afghanistan and elsewhere</a> that drone images aren’t enough to clearly distinguish between civilians and combatants. We also know that current AI technology can contain significant bias that affects its decision making, often with <a href="http://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/">harmful effects</a>. </p>
<p>As future fully autonomous aircraft are likely to be used in similar ways to drones, they will probably follow the practices laid out by drones. Yet states using existing autonomous technologies are excluding them from the wider debate by referring to them as “semi-autonomous” or so-called “legacy systems”. Again, this makes the issue of “killer robots” seem more futuristic than it really is. This also prevents the international community from taking a closer look at whether these systems are fundamentally appropriate under humanitarian law.</p>
<p>Several key principles of international humanitarian law require deliberate human judgements that machines <a href="https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/">are incapable of</a>. For example, the legal definition of who is a civilian and who is a combatant isn’t written in a way that could be programmed into AI, and <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2010.537903">machines lack</a> the situational awareness and ability to infer things necessary to make this decision.</p>
<h2>Invisible decision making</h2>
<p>More profoundly, the more that targets are chosen and potentially attacked by machines, the less we know about how those decisions are made. Drones <a href="https://www.theguardian.com/science/the-lay-scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan">already rely heavily</a> on intelligence data, processed by “black box” algorithms that are very difficult to understand, to choose their proposed targets. This <a href="http://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/">makes it harder</a> for the human operators who actually press the trigger to question target proposals.</p>
<p>As the UN continues to debate this issue, it’s worth noting that most countries in favour of banning autonomous weapons are developing countries, which are typically <a href="http://www.article36.org/wp-content/uploads/2016/04/A36-Disarm-Dev-Marginalisation.pdf">less likely</a> to attend international disarmament talks. So the fact that they are willing to speak out strongly against autonomous weapons makes their doing so all the more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of the ones in favour of autonomous weapons) also reminds us that they are most at risk from this technology.</p>
<p>Given what we know about existing autonomous systems, we should be very concerned that “killer robots” will make breaches of humanitarian law more, not less, likely. This threat can only be prevented by negotiating new international law curbing their use.</p><img src="https://counter.theconversation.com/content/102736/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ingvild Bode receives funding from the Joseph Rowntree Charitable Trust. </span></em></p>The debate on autonomous weapons isn’t paying enough attention to the technology already in use.Ingvild Bode, Senior Lecturer in International Relations, University of KentLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/982492018-06-20T04:37:48Z2018-06-20T04:37:48ZWhat will freight and supply chains look like 20 years from now? Experts ponder the scenarios<p>The Australian government is developing a <a href="https://infrastructure.gov.au/transport/freight/national-strategy.aspx">national freight and supply chain strategy</a>. As part of that effort, we created a set of scenarios describing what Australia’s future might look like 20 years from now. When a large number of experts evaluated all the future drivers of change we identified, two emerged as the most powerful and uncertain: widespread use of automation, and increased pressure to become environmentally sustainable.</p>
<p>We also explored what Australia should do to remain successful in each of these possible futures. Each scenario was crafted as a rich description of the future, full of elements relevant to supply chains and freight. </p>
<p>To illustrate what the world might look like in each of these futures, several “news articles” accompany the scenarios. They tell us of a fleet of robots that deliver parcels by air and ground directly to Australian homes. They describe a container of Australian wines travelling from Victoria to Shanghai without human intervention, using <a href="https://www.smh.com.au/technology/autonomous-ghost-ships-are-coming-to-revolutionise-freight-20170905-gyatcs.html">autonomous ships</a> and <a href="https://theconversation.com/au/topics/autonomous-vehicles-1007">vehicles</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/guilt-free-online-shopping-can-parcel-deliveries-ever-be-truly-carbon-neutral-77629">Guilt-free online shopping: can parcel deliveries ever be truly carbon-neutral?</a>
</strong>
</em>
</p>
<hr>
<p>In one scenario, China has become the sole dominant power in its half of the planet. In another, the world economy has fragmented into blocks, with barely any trade between them. Cyber-attacks, terrorism and slander are used as weapons to disrupt supply chains in one scenario. In another, a whole new generation of consumers, the Alphas, demands high levels of service and fast delivery in everything they buy.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=336&fit=crop&dpr=1 600w, https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=336&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=336&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=422&fit=crop&dpr=1 754w, https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=422&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/223700/original/file-20180619-38819-18mkm61.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=422&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Container terminals have long used autonomous vehicles and machinery, and autonomous ships are on the way.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:9-028_Rotterdam_ECT.jpg">Quistnix/Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>How did we create these scenarios?</h2>
<p>We started by asking 52 experts in freight and supply chains about things they expect will be different two decades from now. These interviews revealed more than 200 future drivers of change. We validated these in a survey with an even larger group of experts. </p>
<p>We then used 32 families of these drivers as the building blocks to create <a href="http://cscl.space/scenarios">four scenarios</a>:</p>
<ol>
<li><a href="http://cscl.space/scenarios/s1.pdf">The Rise of the Machines</a> – a world where technology dominates everything we do</li>
<li><a href="http://cscl.space/scenarios/s2.pdf">Enter the Dragon</a> – China is the dominant force in an increasingly fragmented world</li>
<li><a href="http://cscl.space/scenarios/s3.pdf">Flat, Crowded and Divided</a> – Australia’s population has soared, to the point that easy access to cheap labour has nullified any hopes of a technological revolution</li>
<li><a href="http://cscl.space/scenarios/s4.pdf">Big Brother Goes Green</a> – the effects of climate change are increasingly real, and both governments and savvy consumers demand that companies meet high environmental standards. </li>
</ol>
<p>We made sure that each scenario was plausible and internally consistent. The scenarios were designed to be very different from the present and from each other, and to complement each other as a group.</p>
<p>While these scenarios are fun to read and thoroughly grounded in data, they are not predictions. Their purpose is not to forecast what the world <em>will</em> look like in 20 years. </p>
<p>Instead, the scenarios present us with several versions of what the world <em>might</em> look like. Their purpose is to help us <em>prepare</em> for what the future could bring. I like to think of scenario planning as a vaccine against future surprises.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-service-global-trade-todays-ships-and-cargo-are-smarter-than-ever-46032">To service global trade, today's ships and cargo are smarter than ever</a>
</strong>
</em>
</p>
<hr>
<p>The four scenarios served as the stage for a series of workshops conducted across Australia with a total of 90 experts. In these workshops, the experts discussed the challenges and opportunities each scenario presents for Australia’s freight and supply chains. They proposed ways for Australia to be successful in each scenario, and compared notes on suggestions that worked well across multiple scenarios.</p>
<p>We collected more than 15,000 words’ worth of handwritten expert recommendations in our four workshops. We transcribed and analysed all of them, and prepared a complete summary of the most frequent and robust ideas. These are included in our project’s <a href="https://infrastructure.gov.au/transport/freight/freight-supply-chain-priorities/research-papers/files/Scenario_planning_report.pdf">final report</a>.</p>
<h2>So what do the experts recommend?</h2>
<p>In the experts’ recommendations, it is easy to identify three major themes that are common to all four scenarios.</p>
<p>The first is the ever-growing importance of data. For Australia to be successful in <em>any</em> of the futures we envisioned, large amounts of relevant, timely and reliable data must be gathered and shared. This will require open and common data standards to be developed. The need to protect confidentiality will have to be balanced with the need to share data.</p>
<p>The second major theme is the need to educate for the future. Training in robotics, automation, artificial intelligence (AI) and data analysis should be widely available. A focus on science, technology, engineering and maths (STEM) should start in Year 1. Workers who are displaced by new technologies should be retrained, so they can re-enter the workforce.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/coming-soon-to-a-highway-near-you-truck-platooning-87748">Coming soon to a highway near you: truck platooning</a>
</strong>
</em>
</p>
<hr>
<p>The third major theme is the need to rethink regulation. For Australia to be successful in any of the futures we explored, it is necessary to simplify, standardise and harmonise regulations across levels of government and geographies. Regulations, and the process to create them, must become more flexible and agile, so as to promote innovation.</p>
<p>There are other robust recommendations that, according to the experts, are necessary in all four scenarios. </p>
<p>One is to make exports a strategic priority of national importance, and to make exporting faster and easier.</p>
<p>Another is the need for cities to include logistics in their plans from the start, not as an afterthought.</p>
<p>The many insights obtained in our project are informing the freight and supply chain strategy that the Australian government is creating. These will help those making long-term decisions to avoid future surprises that might not have been anticipated without a systematic examination of the many possible futures before us.</p>
<p class="fine-print"><em><span>The research project described in this article was funded by the Australian Government's Department of Infrastructure, Regional Development and Cities.</span></em></p>
Supply-chain experts see reliable data, STEM education and smarter regulation as essential for Australia to succeed in an increasingly automated world under pressure to be environmentally sustainable.
Roberto Perez-Franco, Senior Research Fellow – Supply Chain Strategy, Deakin University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/88444 2018-01-03T11:25:06Z 2018-01-03T11:25:06Z
To get the most out of self-driving cars, tap the brakes on their rollout
<figure><img src="https://images.theconversation.com/files/199509/original/file-20171215-17848-z746fp.jpg?ixlib=rb-1.1.0&rect=700%2C1437%2C3664%2C2604&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It would be better if people weren't afraid of self-driving cars.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/self-driving-car-scared-human-face-289636868">mato181/Shutterstock.com</a></span></figcaption></figure><p>Every day about <a href="https://www.nhtsa.gov/press-releases/us-dot-releases-new-automated-driving-systems-guidance">100 people</a> die in car crashes on U.S. roads. That death toll is a major reason why both <a href="https://energycommerce.house.gov/selfdrive">Congress</a> and the <a href="https://www.nhtsa.gov/press-releases/us-dot-releases-new-automated-driving-systems-guidance">Trump administration</a> are backing automotive efforts to develop and deploy self-driving cars as quickly as possible.</p>
<p>However, officials’ eagerness far exceeds the degree to which the public views this as a serious concern, and overestimates the public’s willingness to see its driving patterns radically altered. As those of us involved in studies of technology and society have come to understand, foisting a technical fix on a <a href="http://www.newsweek.com/americans-want-humans-driverless-cars-congress-doesnt-678641">skeptical public</a> can lead to a backlash that sets back the cause indefinitely. The backlashes against nuclear power and genetically modified organisms exemplify the problems that arise from rushing technology in the face of public fears. Public safety on the roads is too important to risk a consumer backlash.</p>
<p>I recommend industry, government and consumers take a more measured and incremental approach to full autonomy. Initially emphasizing technologies that can assist human drivers – rather than the abilities of cars to drive themselves – will somewhat delay the day all those lives are saved on U.S. roads. But it will start saving some lives right away, and is more likely to avoid mass rejection of the new technology.</p>
<h2>Not so fast</h2>
<p>Most Americans are <a href="https://www.aaafoundation.org/sites/default/files/2012TrafficSafetyCultureIndex.pdf">indifferent</a> to what officials and safety advocates see as a serious problem. They <a href="https://www.sbnation.com/college-basketball/2017/12/13/16770632/university-of-evansville-aces-plane-crash">react in horror to the deaths</a> of even a few dozen passengers in a <a href="https://www.faa.gov/news/fact_sheets/news_story.cfm?newsId=21274">relatively infrequent airline crash</a> but think little about the 100 lives lost daily from driving. The rewards from driving, such as personal freedom and convenience, overwhelm fears. In fact, most people believe their driving skills are <a href="http://www.psychologicalscience.org/news/when-it-comes-to-driving-most-people-think-their-skills-are-above-average.html">better than average</a>, making them more likely to think they’ll avoid the tragedies that befall others.</p>
<p>As a result, the push for autonomous driving on the basis of improved safety is a solution to a situation the public doesn’t consider a serious problem. We know from the studies of <a href="http://changingminds.org/explanations/meaning/ten_risk-perception_factors.html">psychologist Paul Slovic</a> that the public is very uncomfortable with novel technologies that cede human control to machines. This is particularly true, in a phenomenon called “<a href="https://www.caranddriver.com/features/autonomous-cars-how-safe-is-safe-enough-feature">betrayal aversion</a>,” when the benefits of technologies are overpromised and reality doesn’t appear to be consistent with those expectations. Unless self-driving cars can dramatically reduce fatalities, the public may remain skeptical.</p>
<h2>Serious safety concerns</h2>
<p>Surveys show the American public is far from sold on the safety benefits of autonomous vehicles. A recent survey by the Pew Research Center revealed that <a href="http://www.pewinternet.org/2017/10/04/americans-attitudes-toward-driverless-vehicle">more than half of the American public</a> would be worried about riding in an autonomous vehicle due to concerns over safety and the lack of control. </p>
<p><a href="http://www.pewinternet.org/2017/10/04/automation-in-everyday-life/pi_2017-10-04_automation_3-05/"><img width="420" height="671" src="http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/10/03102012/PI_2017.10.04_Automation_3-05.png" class="attachment-large size-large" alt="Slight majority of Americans would not want to ride in a driverless vehicle if given the chance; safety concerns, lack of trust lead their list of concerns"></a></p>
<p>Another survey found that <a href="http://www.umich.edu/%7Eumtriswt/PDF/SWT-2016-8.pdf">only 15 percent of people</a> would prefer autonomous vehicles to traditional human-driven cars. It’s true that some groups (men, people with more education and people under 45) are less worried than others, but these differences of opinion are less significant than the overall public view. Aside from simply the fear of being in these vehicles without the option of control, much of the American public still relishes the joy of the driving experience.</p>
<p>Public fears may ease as people become familiar with self-driving cars, but this experience needs to be gained gradually over time. The mental chasm between having complete control over the vehicle and having no control at all is huge. <a href="http://www.consumerunion.org/wp-content/uploads/2017/10/cu-letter-on-AV-START-Act-for-Senate-markup-10-3-2017.pdf">Consumer advocates</a> are already warning public officials that federal laws and rules designed to hasten the movement to autonomy are too permissive, and risk triggering a public backlash.</p>
<p>A steady stream of crashes, both serious and minor, would simply reinforce public fears that self-driving cars are not safe. The media, sensitive to these fears, will be eager to cry betrayal when there is a contradiction between these accidents and the technology’s rationale. And politicians, wanting to be seen as protectors of public health, may promote a new “<a href="https://www.nytimes.com/2017/07/21/technology/self-driving-cars-washington-congress.html">Make America Drive Again</a>” movement.</p>
<p>To avoid public backlash or overreaction, industry and government should not rush, but rather move more deliberately toward deploying fully autonomous cars on U.S. roads. There is still much the industry can do in terms of cutting-edge technology to assist drivers. <a href="https://theconversation.com/some-of-the-best-parts-of-autonomous-vehicles-are-already-here-84029">Innovations such as adaptive cruise control</a> and automatic emergency braking already have <a href="http://www.consumereports.org/autonomous-driving/doubts-grow-over-fully-autonomous-car-tech/">considerable public support</a> and will work to acclimate the public to more advanced stages of driver autonomy.</p>
<p>Government and industry are right to continue inventing and innovating technologies that can contribute to autonomous vehicles. But rather than racing to get self-driving cars on U.S. roads, they should slow the rollout down to a pace the public can adjust to. That way, the safety benefits can be both real and long-lasting.</p>
<p class="fine-print"><em><span>Jack Barkenbus does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
If government and industry overhype autonomous vehicles, the public may expect too much, be disappointed and reject the new technology.
Jack Barkenbus, Visiting Scholar, Vanderbilt Institute for Energy & Environment, Vanderbilt University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/87419 2017-11-28T11:26:15Z 2017-11-28T11:26:15Z
Redefining ‘safety’ for self-driving cars
<figure><img src="https://images.theconversation.com/files/196546/original/file-20171127-2042-1cag4bd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When self-driving cars get in crashes, who's to blame?</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Self-Driving-Cars-Crash/26577b6f110145febea65428f03824e1/14/0">Santa Clara Valley Transportation Authority via AP</a></span></figcaption></figure><p>In early November, a <a href="https://www.theverge.com/2017/11/8/16626224/las-vegas-self-driving-shuttle-crash-accident-first-day">self-driving shuttle and a delivery truck collided in Las Vegas</a>. The event, in which no one was injured and no property was seriously damaged, attracted media and public attention in part because one of the vehicles was driving itself – and because that shuttle had been operating for less than an hour before the crash.</p>
<p>It’s not the first collision involving a self-driving vehicle. Other crashes have involved <a href="https://www.usatoday.com/story/money/business/tech/2017/03/29/tempe-releases-police-report-uber-crash/99797486/">Ubers</a> <a href="https://www.azcentral.com/story/money/business/tech/2017/09/08/self-driving-uber-involved-tempe-crash/647843001/">in Arizona</a>, a <a href="https://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html">Tesla in “autopilot” mode</a> in Florida and <a href="https://cleantechnica.com/2017/10/05/gms-self-driving-test-vehicles-involved-6-crashes-september/">several</a> <a href="http://bigthink.com/natalie-shoemaker/why-google-self-driving-car-crash-is-a-learning-experience">others</a> in California. But in <a href="https://www.theverge.com/2016/2/29/11134344/google-self-driving-car-crash-report">nearly every case</a>, it was human error, not the self-driving car, that caused the problem.</p>
<p>In Las Vegas, the self-driving shuttle noticed a truck up ahead was backing up, and stopped and waited for it to get out of the shuttle’s way. But the human truck driver didn’t see the shuttle, and kept backing up. As the truck got closer, the shuttle didn’t move – forward or back – so the truck grazed the shuttle’s front bumper.</p>
<p>As a researcher working on autonomous systems for the past decade, I find that this event raises a number of questions: Why didn’t the shuttle honk, or back up to avoid the approaching truck? Was stopping and not moving the safest procedure? If self-driving cars are to make the roads safer, the bigger question is: What should these vehicles do to reduce mishaps? <a href="https://engineering.tamu.edu/mechanical/people/saripalli">In my lab</a>, we are developing self-driving cars and shuttles. We’d like to solve the underlying safety challenge: Even when autonomous vehicles are doing everything they’re supposed to, the drivers of nearby cars and trucks are still flawed, error-prone humans.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=381&fit=crop&dpr=1 600w, https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=381&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=381&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=479&fit=crop&dpr=1 754w, https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=479&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/196535/original/file-20171127-2066-1ij19i4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=479&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The driver who was backing up a truck didn’t see a self-driving shuttle in his way.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Driverless-Shuttle-Vegas/4fa78e6eb7e84c87a9244ef596df0d40/1/0">Kathleen Jacob/KVVU-TV via AP</a></span>
</figcaption>
</figure>
<h2>How crashes happen</h2>
<p>There are two main causes for crashes involving autonomous vehicles. The first source of problems is when the sensors don’t detect what’s happening around the vehicle. Each sensor has its quirks: GPS works only with a clear view of the sky; cameras need enough light; lidar can’t see through fog; and radar is not particularly accurate. There may not be another sensor with different capabilities to take over. It’s not clear what the ideal set of sensors is for an autonomous vehicle – and, with both cost and computing power as limiting factors, the solution can’t be just adding more and more. </p>
<p>The second major problem happens when the vehicle encounters a situation that the people who wrote its software didn’t plan for – like having a truck driver not see the shuttle and back up into it. Just like human drivers, self-driving systems have to make hundreds of decisions every second, adjusting for new information coming in from the environment. When a self-driving car experiences something it’s not programmed to handle, it typically stops or pulls over to the roadside and waits for the situation to change. The shuttle in Las Vegas was presumably waiting for the truck to get out of the way before proceeding – but the truck kept getting closer. The shuttle may not have been programmed to honk or back up in situations like that – or may not have had room to back up. </p>
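<p>The “stop or pull over and wait” behaviour described above amounts to a simple fallback policy. As a purely illustrative sketch – not any real vehicle’s control code; every name here is invented – it might look like this:</p>

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()
    STOP_AND_WAIT = auto()
    PULL_OVER = auto()

def fallback_policy(situation_recognized: bool, can_pull_over: bool) -> Action:
    """Toy fallback policy: when the planner has no rule for the current
    situation, the vehicle halts rather than improvising."""
    if situation_recognized:
        return Action.PROCEED
    # Unhandled situation: prefer leaving the roadway if that is possible,
    # otherwise stop in place and wait for conditions to change.
    return Action.PULL_OVER if can_pull_over else Action.STOP_AND_WAIT
```

<p>Note what such a policy cannot do: like the Las Vegas shuttle, it has no branch for honking or reversing away from an approaching truck, because nobody wrote one.</p>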
<p>The challenge for designers and programmers is combining the information from all the sensors to create an accurate representation – a computerized model – of the space around the vehicle. Then the software can interpret the representation to help the vehicle navigate and interact with whatever might be happening nearby. If the system’s perception isn’t good enough, the vehicle can’t make a good decision. The main cause of the fatal Tesla crash was that <a href="http://www.latimes.com/business/autos/la-fi-hy-tesla-crash-20170620-story.html">the car’s sensors couldn’t tell the difference</a> between the bright sky and a large white truck crossing in front of the car.</p>
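<p>To make the fusion idea concrete, here is a deliberately minimal sketch – the function name, object labels and confidence numbers are all invented for illustration, and real perception stacks are far more elaborate. Each sensor reports detections with a confidence score, and an object enters the world model only when the sensors collectively agree strongly enough:</p>

```python
from collections import defaultdict

def fuse_detections(sensor_reports, threshold=1.0):
    """Combine per-sensor detections into a single world model.

    sensor_reports: mapping of sensor name -> list of (object_id, confidence).
    An object is kept when its summed confidence across sensors reaches the
    threshold, so one noisy sensor cannot decide alone.
    """
    scores = defaultdict(float)
    for detections in sensor_reports.values():
        for object_id, confidence in detections:
            scores[object_id] += confidence
    return {obj for obj, score in scores.items() if score >= threshold}

# A truck seen clearly by radar and faintly by camera is kept;
# a glare artifact seen only by the camera is not.
world = fuse_detections({
    "camera": [("truck", 0.4), ("glare_artifact", 0.3)],
    "radar":  [("truck", 0.7)],
})
```

<p>The design choice the sketch illustrates is cross-checking between modalities with complementary quirks: a camera blinded by a bright sky can be outvoted by a radar return, which is exactly the kind of corroboration the fatal Tesla crash lacked.</p>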
<p>If autonomous vehicles are to fulfill humans’ <a href="https://theconversation.com/some-of-the-best-parts-of-autonomous-vehicles-are-already-here-84029">expectations of reducing crashes</a>, it won’t be enough for them to drive safely. They must also be the ultimate defensive driver, ready to react when others nearby drive unsafely. An Uber crash in Tempe, Arizona, in March 2017 is an example of this.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=442&fit=crop&dpr=1 600w, https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=442&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=442&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=555&fit=crop&dpr=1 754w, https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=555&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/196544/original/file-20171127-2009-g6d076.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=555&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A human-driven car crashed into this Uber self-driving SUV, flipping it on its side.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Self-Driving-Cars-Arizona/0fd5dee09ccc496ea5a7e38ebf968ed8/1/0">Tempe Police Department via AP</a></span>
</figcaption>
</figure>
<p>According to media reports, in that incident, a <a href="https://www.usatoday.com/story/money/business/tech/2017/03/29/tempe-releases-police-report-uber-crash/99797486/">person in a Honda CRV</a> was driving on a major road near the center of Tempe. She wanted to turn left, across three lanes of oncoming traffic. She could see two of the three lanes were clogged with traffic and not moving. She could not see the farthest lane from her, in which an Uber was driving autonomously at 38 mph in a 40 mph zone. The Honda driver made the left turn and hit the Uber car as it entered the intersection. </p>
<p>A human driver in the Uber car approaching an intersection might have expected cars to be turning across its lane. A person might have noticed she couldn’t see if that was happening and slowed down, perhaps avoiding the crash entirely. An autonomous car that’s safer than humans would have done the same – but the Uber wasn’t programmed to.</p>
<h2>Improving testing</h2>
<p>That Tempe crash and the more recent Las Vegas one are both examples of a vehicle not understanding the situation enough to determine the correct action. The vehicles were following the rules they’d been given, but they were not making sure their decisions were the safest ones. This is primarily because of the way most autonomous vehicles are tested.</p>
<p>The basic standard, of course, is whether self-driving cars can follow the rules of the road, obeying traffic lights and signs, knowing local laws about signaling lane changes, and otherwise behaving like a law-abiding driver. But that’s only the beginning.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=336&fit=crop&dpr=1 600w, https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=336&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=336&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=422&fit=crop&dpr=1 754w, https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=422&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/195283/original/file-20171118-11450-8kr9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=422&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sensor arrays atop and along the bumpers of a research vehicle at Texas A&M.</span>
<span class="attribution"><a class="source" href="https://unmanned.tamu.edu/projects/">Swaroopa Saripalli</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary. Testers need to consider other vehicles as adversaries, and develop plans for extreme situations. For instance, what should a car do if a truck is driving in the wrong direction? At the moment, self-driving cars might try to change lanes, but could end up stopping dead and waiting for the situation to improve. Of course, no human driver would do this: A person would take evasive action, even if it meant breaking a rule of the road, like switching lanes without signaling, driving onto the shoulder or even speeding up to avoid a crash.</p>
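<p>Treating other vehicles as adversaries suggests scenario-based testing: enumerate extreme situations and assert what the planner must do in each. A hypothetical harness – the scenario names and required behaviours below are ours, not any real test suite – could look like this:</p>

```python
SCENARIOS = {
    # scenario name -> defensive behaviour the planner must choose
    "wrong_way_truck_ahead":  "evade",            # not "stop_and_wait"
    "truck_reversing_toward": "reverse_or_honk",
    "blocked_lane_left_turn": "slow_down",
}

def run_scenario_suite(planner):
    """Return the scenarios where the planner's choice differs from the
    required defensive behaviour, with (chosen, required) pairs."""
    failures = {}
    for name, required in SCENARIOS.items():
        chosen = planner(name)
        if chosen != required:
            failures[name] = (chosen, required)
    return failures

# A planner that freezes in every unhandled situation fails all three scenarios.
frozen_planner = lambda scenario: "stop_and_wait"
failures = run_scenario_suite(frozen_planner)
```

<p>A suite like this is the software analogue of a human driving test: passing it means demonstrating judgment in difficult situations, not merely obeying the rules of the road.</p>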
<p>Self-driving cars must be taught to understand not only what the surroundings are but the context: A car approaching from the front is not a danger if it’s in the other lane, but if it’s in the car’s own lane, circumstances are entirely different. Car designers should test vehicles based on how well they perform difficult tasks, like parking in a crowded lot or changing lanes in a work zone. This may sound a lot like giving a human a driving test – and that’s exactly what it should be, if self-driving cars and people are to coexist safely on the roads.</p>
<p class="fine-print"><em><span>Srikanth Saripalli does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
If autonomous vehicles are going to be safer than human drivers, they’ll need to improve their ability to perceive and understand their surroundings – and become the ultimate defensive drivers.
Srikanth Saripalli, Associate Professor in Mechanical Engineering, Texas A&M University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/86479 2017-10-30T02:18:22Z 2017-10-30T02:18:22Z
An AI professor explains: three concerns about granting citizenship to robot Sophia
<figure><img src="https://images.theconversation.com/files/192365/original/file-20171029-13367-1f42vke.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Citizen Sophia.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/itupictures/34328656564/">Flickr/AI for GOOD Global Summit</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>I was surprised to hear that a robot named Sophia was <a href="http://www.arabnews.com/node/1183166/saudi-arabia">granted citizenship</a> by the Kingdom of Saudi Arabia.</p>
<p>The announcement last week followed the Kingdom’s <a href="http://www.arabnews.com/node/1182501/saudi-arabia">commitment</a> of US$500 billion to build a new city powered by robotics and renewables. </p>
<p>One of the most honourable concepts for a human being, to be a citizen and all that brings with it, has been given to a machine. As a professor who works daily on making AI and autonomous systems more trustworthy, I don’t believe human society is ready yet for citizen robots.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-make-robots-that-we-can-trust-79525">How to make robots that we can trust</a>
</strong>
</em>
</p>
<hr>
<p>To grant a robot citizenship is a declaration of trust in a technology that I believe is not yet trustworthy. It brings social and ethical concerns that we as humans are not yet ready to manage.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/03QduDcu5wc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Robot Sophia is officially a citizen of Saudi Arabia.</span></figcaption>
</figure>
<h2>Who is Sophia?</h2>
<p>Sophia is a robot developed by the Hong Kong-based company <a href="http://www.hansonrobotics.com/">Hanson Robotics</a>. Sophia has a female face that can display emotions. Sophia speaks English. Sophia makes jokes. You could have a reasonably intelligent conversation with Sophia.</p>
<p>Sophia’s creator is Dr David Hanson, a 2007 PhD graduate from the University of Texas.</p>
<p>Sophia is reminiscent of “Johnny 5”, the first robot to become a US citizen in the 1986 movie <a href="http://www.imdb.com/title/tt0091949/">Short Circuit</a>. But Johnny 5 was a mere idea, something dreamt up by comic science fiction writers S. S. Wilson and Brent Maddock.</p>
<p>Did the writers imagine that in around 30 years their fiction would become a reality?</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;923930694425960449&quot;}"></div></p>
<h2>Risk to citizenship</h2>
<p>Citizenship – in my opinion, the most honourable status a country grants to its people – is facing an existential risk.</p>
<p>As a researcher who <a href="https://www.youtube.com/watch?v=ZcNKr9Anm0Q&list=PLS4rZ-CRtZpQsiRCyCEa_T8klJr1EI8dN&index=12">advocates</a> for designing autonomous systems that are trustworthy, I know the technology is not ready yet. </p>
<p>We have <a href="http://ieeexplore.ieee.org/document/7480763/">many challenges</a> that we need to overcome before we can <a href="http://rd.springer.com/article/10.1007/s12559-015-9365-5">truly trust these systems</a>. For example, we don’t yet have reliable mechanisms to assure us that these intelligent systems will always behave ethically and in accordance with our moral values, or to protect us against them taking a wrong action with catastrophic consequences.</p>
<p>Here are three reasons I think it is a premature decision to grant Sophia citizenship.</p>
<h2>1. Defining identity</h2>
<p>Citizenship is granted to a unique identity. </p>
<p>Each of us, humans I mean, possesses a unique signature that distinguishes us from any other human. When we get through customs without talking to a human, our <a href="http://www.bbc.com/news/technology-38731016">identity is automatically established</a> using an image of our face, iris and fingerprint. My PhD student establishes <a href="http://ieeexplore.ieee.org/abstract/document/7906958/">human identity by analysing humans’ brain waves</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-all-need-to-forget-even-robots-81387">We all need to forget, even robots</a>
</strong>
</em>
</p>
<hr>
<p>What gives Sophia her identity? Her <a href="https://www.pcmag.com/encyclopedia/term/46422/mac-address">MAC address</a>? A barcode, a unique skin mark, an audio mark in her voice, an electromagnetic signature similar to human brain waves? </p>
<p>These and other technological identity management protocols are all possible, but they do not establish Sophia’s identity – they can only establish hardware identity. What then is Sophia’s identity?</p>
<p>To me, identity is a <a href="http://www.actforyouth.net/resources/n/n_identity-handout.pdf">multidimensional construct</a>. It sits at the intersection of who we are biologically, cognitively, and as defined by every experience, culture and environment we have encountered. It’s not clear where Sophia fits in this description.</p>
<h2>2. Legal rights</h2>
<p>For the purposes of this article, let’s assume that Sophia the citizen robot is able to vote. But who is making the decision on voting day – Sophia or the manufacturer?</p>
<p>Presumably Sophia the citizen is also “liable” to pay income tax, because Sophia has a legal identity independent of its creator, the company.</p>
<p>Sophia must also have the same right to equal protection under the law as other citizens.</p>
<p>Consider this hypothetical scenario: a policeman sees Sophia and a woman each being attacked by a person. The policeman can only protect one of them: who should it be? Is it right if the policeman chooses Sophia because Sophia moves on wheels and has no capacity for self-defence?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/artificial-intelligence-researchers-must-learn-ethics-82754">Artificial intelligence researchers must learn ethics</a>
</strong>
</em>
</p>
<hr>
<p>Today, the artificial intelligence (AI) community is still debating what principles should govern the design and use of AI, let alone what the laws should be. </p>
<p>The most recent list proposes 23 principles known as the <a href="https://futureoflife.org/ai-principles/">Asilomar AI Principles</a>. Examples of these include: Failure Transparency (ascertaining the cause if an AI system causes harm); Value Alignment (aligning the AI system’s goals with human values); and Recursive Self-Improvement (subjecting AI systems with abilities to self-replicate to strict safety and control measures).</p>
<h2>3. Social rights</h2>
<p>Let’s talk about relationships and reproduction.</p>
<p>As a citizen, will Sophia, the humanoid emotional robot, be allowed to “marry” or “breed” if Sophia chooses to? <a href="https://www.manufacturingtomorrow.com/news/2017/07/19/ndsu-students-develop-3d-printing-self-replicating-robot/10034/">Students from North Dakota State University</a> have taken steps to create a robot that self-replicates using 3D printing technologies.</p>
<p>If more robots join Sophia as citizens of the world, perhaps they too could claim their rights to self-replicate into other robots. These robots would also become citizens. With no resource constraints on how many children each of these robots could have, they could easily exceed the human population of a nation.</p>
<p>As voting citizens, these robots could create societal change. Laws might change, and suddenly humans could find themselves in a place they hadn’t imagined.</p>
<p class="fine-print"><em><span>Hussein Abbass receives funding from the Australian Research Council.</span></em></p>An expert in artificial intelligence believes we’re not ready for the challenges posed by Saudi Arabia granting a robot citizenship. Key questions about robot identity and rights remain unanswered.Hussein Abbass, Professor, School of Engineering & IT, UNSW-Canberra, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/841782017-10-03T10:11:35Z2017-10-03T10:11:35ZGovernments, car companies must resolve their competing goals for self-driving cars<figure><img src="https://images.theconversation.com/files/186422/original/file-20170918-8300-eu875i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When will cars be able to talk to their surroundings?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/self-driving-electronic-computer-car-on-559597045">posteriori/Shutterstock.com</a></span></figcaption></figure><p>What self-driving cars want, and what people want from them, varies widely. And often these desires are at odds with each other. For instance, carmakers – and the designers of the software that will run autonomous vehicles – know that it’s safest if cars stay far away from each other. But traffic engineers know that if every car operated to ensure lots of surrounding space, local roads and highways alike would be clogged for miles, and nobody would get anywhere.</p>
<p>Another inherent conflict involves how cars handle crises. No consumer wants to buy a self-driving car that’s programmed, even in the most remote of circumstances, <a href="http://dx.doi.org/10.1126/science.aaf2654">to kill its driver</a> instead of someone else (even if it would <a href="http://moralmachine.mit.edu/">save a class of kindergarteners</a> or a group of Nobel Prize winners). However, if every car is programmed always to save its occupants at any cost, <a href="https://www.wired.com/2016/06/self-driving-cars-will-power-kill-wont-conscience/">pedestrians and cyclists</a> are at risk. </p>
<p>As federal regulations for self-driving cars advance in <a href="http://www.post-gazette.com/business/tech-news/2017/09/06/The-SELF-DRIVE-Act-just-passed-the-U-S-House-here-s-what-that-means-for-autonomous-vehicles/stories/201709060138">congressional votes</a> and the <a href="https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf">U.S. Department of Transportation issues guidelines</a>, an important part of real progress will be how everyone involved approaches those inherent conflicts. Research at the <a href="http://www.transportation.institute.ufl.edu/">University of Florida Transportation Institute</a>, where I serve as the director, shows that the key to resolving these competitions of goals is communication among all the elements of the transportation network – cars, pedestrians, bicycles, guardrails, traffic lights, stop signs, roadways themselves and everything else. And if they’re all going to talk to each other, the people who make all those things need to talk to each other too. </p>
<p>Our institute is providing opportunities to do that. Our efforts include working with the Florida Department of Transportation and the City of Gainesville to <a href="https://www.demandstar.com/supplier/bids/Bid_Detail.asp?_PU=%2Fsupplier%2Fbids%2Fagency_inc%2Fbid_list%2Easp%3F_RF%3D1%26f%3Dsearch%26mi%3D10071&LP=BB&BI=331953">set up an autonomous shuttle</a> between the UF campus and downtown Gainesville and partnering with industry to create a <a href="http://www.transportation.institute.ufl.edu/research-2/istreet/">testing area for autonomous cars and other advanced transportation technologies</a> on campus roads and surrounding highways. But with so little coordination in today’s transportation world, there’s a long way to go.</p>
<h2>Problems large and small</h2>
<p>The road system in the U.S. has serious problems. Americans spend <a href="http://www.reuters.com/article/us-usa-traffic-study/u-s-commuters-spend-about-42-hours-a-year-stuck-in-traffic-jams-idUSKCN0QV0A820150826">more than 40 hours per year</a> stuck in traffic; <a href="https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812412">more than 30,000 people die</a> each year in crashes on U.S. roads, making cars <a href="http://www.worldlifeexpectancy.com/usa-cause-of-death-by-age-and-gender">one of the leading causes of death</a> for Americans under the age of 64. These are serious problems, and <a href="http://www.automatedvehiclessymposium.org/home">many hope</a> that autonomous cars can help solve them.</p>
<p>Nationwide statistics can mask smaller issues, though. The country’s transportation system is full of examples where coordination and collaboration would be extremely helpful, and where the individual components may work but the system overall isn’t as streamlined as it could be.</p>
<p>Many communities have major roads where <a href="http://www.twincities.com/2017/09/09/minnesota-36-commuters-to-get-20-more-seconds-of-green-and-whos-using-new-st-croix-bridge/">drivers have to stop unnecessarily</a> because <a href="http://news.mit.edu/2014/traffic-lights-theres-a-better-way-0707">traffic lights aren’t coordinated</a> among the different towns drivers pass through. And because different government agencies operate highways and local roads, when emergencies happen, drivers aren’t always <a href="https://waldo.villagesoup.com/p/officials-debrief-aug-2-traffic-nightmare/1684341">rerouted smoothly</a> or efficiently.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/186421/original/file-20170918-8236-4rojqf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The city of Atlanta is one of many communities – including Gainesville, Florida – exploring the technology and effects of self-driving cars.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Self-Driving-Cars/22319764d6d64e0286906b6d4eef15f2/2/0">AP Photo/Johnny Clark</a></span>
</figcaption>
</figure>
<h2>Making a place for testing</h2>
<p>With the Florida Department of Transportation and the city of Gainesville, our institute is building what we’re calling <a href="http://www.transportation.institute.ufl.edu/research-2/istreet/">I-STREET, a testing infrastructure</a>
for autonomous vehicles and related technologies. As new components such as sensors and other monitoring equipment are installed on roads and highways in and around the university’s campus, researchers will be able to evaluate a range of advanced technologies. For instance, we’ll use cars that can talk to the other elements of the system, including each other, and can drive themselves on roads equipped with sensors to monitor traffic conditions.</p>
<p>In preliminary simulations, we have found real savings in travel time with self-driving vehicles that can communicate with their surroundings and adjust their paths on the go. For example, when self-driving cars and traffic lights can talk to each other, they can adjust cars’ speeds and the timing of red and green lights to help every car move more smoothly. Depending on the level of traffic and the number of self-driving cars mixed into human-driven traffic, travel times can <a href="https://doi.org/10.1016/j.trc.2014.10.001">drop by 16 to 36 percent</a>, which may save nearly a minute of delay per car.</p>
<p>On highways, a <a href="https://ops.fhwa.dot.gov/publications/fhwahop14020/sec1.htm">major bottleneck happens around on-ramps</a>, where entering vehicles may have trouble finding openings in fast-moving traffic. When frustrated drivers force their way onto the road, nearby cars must brake abruptly and <a href="http://www.baltimoresun.com/news/breaking/bs-md-co-accident-death-20111228-story.html">may even crash</a>. I helped <a href="https://doi.org/10.1016/j.trc.2017.04.015">develop an algorithm</a> that uses information from self-driving vehicles to plan optimal paths for them. It can tell the cars already on the highway to move to the leftmost lane, making room for entering vehicles. Our simulations show that everyone’s collective travel time can be reduced by as much as 35 percent for the area around the on-ramp, or about 40 seconds per vehicle when traffic is heavy.</p>
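The lane-clearing idea described above can be sketched in a few lines of code. This is an invented toy simulation, not the published algorithm: the merge-zone boundaries, gap threshold, lane numbering and car list are all made up for illustration.

```python
# Toy sketch of cooperative on-ramp merging: connected cars already on the
# highway are asked to vacate the rightmost lane near the on-ramp so that
# entering vehicles find a gap. All numbers are invented for illustration.

MERGE_ZONE = (100.0, 200.0)  # stretch of highway around the on-ramp (metres)
MIN_GAP = 10.0               # headway needed to change lanes safely (metres)

def gap_ok(car, others, target_lane):
    """True if no other car in target_lane is within MIN_GAP of `car`."""
    return all(abs(o["pos"] - car["pos"]) >= MIN_GAP
               for o in others if o["lane"] == target_lane)

def clear_right_lane(cars):
    """Ask connected right-lane cars inside the merge zone to move one lane left."""
    for car in cars:
        in_zone = MERGE_ZONE[0] <= car["pos"] <= MERGE_ZONE[1]
        if car["connected"] and car["lane"] == 0 and in_zone:
            if gap_ok(car, [c for c in cars if c is not car], target_lane=1):
                car["lane"] = 1  # lane 0 = rightmost, lane 1 = next lane over
    return cars

cars = [
    {"pos": 120.0, "lane": 0, "connected": True},   # in zone: should move left
    {"pos": 150.0, "lane": 1, "connected": True},   # already in lane 1
    {"pos": 50.0,  "lane": 0, "connected": True},   # before the zone: stays put
    {"pos": 130.0, "lane": 0, "connected": False},  # human-driven: cannot be told
]
clear_right_lane(cars)
```

Human-driven cars, as in the last entry, simply stay where they are – which is why the simulated savings in the article depend on how many self-driving cars are mixed into the traffic.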
<p>This type of intercar communication, coupled with the involvement of road sensors on the highway and in the on-ramp, can be built only if governments, contractors and international car manufacturers work together. That can ensure not only that individual vehicles are safe but also that the entire traffic system functions efficiently.</p>
<p class="fine-print"><em><span>Lily Elefteriadou receives funding from Florida Department of Transportation, US DOT, NSF, and the National Cooperative Highway Research Program. She is affiliated with the American Society of Civil Engineers (ASCE), the Institute of Transportation Engineers (ITE), the Transportation Research Board, ITS America, and the Women in Transportation Seminar (WTS). She works for the University of Florida. </span></em></p>If all the elements in the transportation system are going to talk to each other, the people at the companies and government agencies that make those items need to talk to each other too.Lily Elefteriadou, Professor of Civil Engineering; Director of University of Florida Transportation Institute, University of FloridaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/827542017-08-29T04:23:04Z2017-08-29T04:23:04ZArtificial intelligence researchers must learn ethics<p>Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have.</p>
<p>More than 100 technology pioneers recently published an <a href="https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war">open letter to the United Nations</a> on the topic of lethal autonomous weapons, or “killer robots”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-make-robots-that-we-can-trust-79525">How to make robots that we can trust</a>
</strong>
</em>
</p>
<hr>
<p>These people, including the entrepreneur Elon Musk and the founders of several robotics companies, are part of an effort that <a href="https://futureoflife.org/open-letter-autonomous-weapons/">began in 2015</a>. The original letter called for an end to an arms race that it claimed could be the “third revolution in warfare, after gunpowder and nuclear arms”.</p>
<p>The UN has a role to play, but responsibility for the future of these systems also needs to begin in the lab. The education system that trains our AI researchers needs to school them in ethics as well as coding.</p>
<h2>Autonomy in AI</h2>
<p>Autonomous systems can make decisions for themselves, with little to no input from humans. This greatly increases the usefulness of robots and similar devices. </p>
<p>For example, an autonomous delivery drone only requires the delivery address, and can then work out for itself the best route to take – overcoming any obstacles that it may encounter along the way, such as adverse weather or a flock of curious seagulls.</p>
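Route planning of this kind is often sketched as a shortest-path search over a grid, with blocked cells standing in for obstacles such as bad weather. A minimal breadth-first-search illustration follows; the grid, obstacle layout and coordinates are invented for this example.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search on a grid; '#' cells are obstacles.
    Returns the length of the shortest route, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # no obstacle-free route exists

# A storm cell ('#') blocks the direct path; the search routes around it.
grid = [". . . .".split(),
        ". # # .".split(),
        ". . . .".split()]
steps = plan_route(grid, start=(0, 0), goal=(2, 3))
```

Real delivery drones plan in continuous space with dynamic obstacles, but the principle – search for the best path given what the sensors report – is the same.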
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=420&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=420&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183590/original/file-20170828-1533-tno6h7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=420&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Drones deliver more than just food.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/routeplanning/33751810990/in/photolist-TqwRvG-TP22fV-TqwRE9-TqwRQQ-9NvFDv-TWXUMN-TqwR13-TqwRjQ-TqwQF5-TqwRaS-TqwQyS-nxAbfy-q3FsGo-qkeRYv-q7bQta-STZJcB-9ND8dE-STZJGK-U8xYB1-TzWW3F-S66Moi-T69yr1-S66MyD-VADYoe-S66MKa-S66MUZ-S66MRT-oqrUG8-nxYm9s-j5xuXW-ni9oXC-4EX5m8-pouax8-dBJHYu-ikvXgY-uU2e1x-8mpYBw-B9vcqD-5Y9TAs-S66MH6-S66MDZ-v5z1pV-VscVKh-JXgHJQ-T1EWSV-RBtfBF-xRuSuf-xUtfyM-L5XAhB-xSXuwU">www.routexl.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span>
</figcaption>
</figure>
<p>There has been a great deal of research into autonomous systems, and delivery drones are currently being developed by companies such as <a href="https://thenextweb.com/tech/2017/08/24/amazon-patent-details-the-scary-future-of-drone-delivery/#.tnw_1oUtjT67">Amazon</a>. Clearly, the same technology could easily be used to make deliveries that are significantly nastier than food or books. </p>
<p>Drones are also becoming smaller, cheaper and more robust, which means it will soon be feasible for flying armies of thousands of drones to be manufactured and deployed. </p>
<p>The potential for the deployment of weapons systems like this, largely decoupled from human control, prompted the letter urging the UN to “find a way to protect us all from these dangers”.</p>
<h2>Ethics and reasoning</h2>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1133&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1133&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183576/original/file-20170828-17112-yfyw9a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1133&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Thomas Aquinas.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Carlo_Crivelli_007.jpg">Wikipedia Commons</a></span>
</figcaption>
</figure>
<p>Whatever your opinion of such weapons systems, the issue highlights the need for consideration of ethical issues in AI research. </p>
<p>As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning. </p>
<p>It is precisely this kind of reasoning that is increasingly required. For example, driverless cars, which are <a href="http://fortune.com/2017/01/20/self-driving-test-sites/">being tested in the US</a>, will need to be able to make judgements about potentially dangerous situations.</p>
<p>For instance, how should a driverless car react if a cat unexpectedly crosses the road? Is it better to run over the cat, or to swerve sharply to avoid it, risking injury to the car’s occupants? </p>
<p>Hopefully such cases will be rare, but the car will need to be designed with some specific principles in mind to guide its decision making. As Virginia Dignum put it when delivering her paper “<a href="https://www.ijcai.org/proceedings/2017/0655.pdf">Responsible Autonomy</a>” at the recent International Joint Conference on Artificial Intelligence (<a href="https://ijcai-17.org/">IJCAI</a>) in Melbourne: </p>
<blockquote>
<p>The driverless car will have ethics; the question is whose? </p>
</blockquote>
<p>A similar theme was explored in the paper “<a href="https://www.ijcai.org/proceedings/2017/0658.pdf">Automating the Doctrine of Double Effect</a>” by Naveen Sundar Govindarajulu and Selmer Bringsjord. </p>
<p>The <a href="https://plato.stanford.edu/entries/double-effect/">Doctrine of Double Effect</a> is a means of reasoning about moral issues, such as the right to self-defence under particular circumstances, and is credited to the 13th-century Catholic scholar <a href="http://www.iep.utm.edu/aquinas/">Thomas Aquinas</a>. </p>
<p>The name Double Effect comes from obtaining a good effect (such as saving someone’s life) as well as a bad effect (harming someone else in the process). This is a way to justify actions such as a drone shooting at a car that is running down pedestrians.</p>
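The doctrine is usually stated as a small set of jointly necessary conditions, which is what makes it a candidate for formalisation, as in the paper above. The sketch below is a heavily simplified rendering with the conditions reduced to boolean flags; real formalisations use proper logics of action and intention, and the example scenario is invented.

```python
def dde_permissible(act):
    """Crude boolean rendering of the Doctrine of Double Effect's classic
    conditions. An action with both a good and a bad effect is permissible
    only if all four conditions hold. Illustrative structure only."""
    return (act["act_itself_permissible"]       # the act is not wrong in itself
            and act["good_effect_intended"]     # only the good effect is intended
            and not act["bad_effect_is_means"]  # harm is not the means to the good
            and act["good_outweighs_bad"])      # proportionality

stop_car = {  # a drone disables a car that is running down pedestrians
    "act_itself_permissible": True,
    "good_effect_intended": True,   # the intent is to save the pedestrians
    "bad_effect_is_means": False,   # the driver's injury is a side effect
    "good_outweighs_bad": True,
}
```

If the harm were instead the means to the good effect – harming the driver *in order to* achieve some end – the third condition fails and the action is not justified under the doctrine.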
<h2>What does this mean for education?</h2>
<p>The emergence of ethics as a topic for discussion in AI research suggests that we should also consider how we prepare students for a world in which autonomous systems are increasingly common. </p>
<p>The need for “<a href="http://stemfoundation.org.uk/asset/resource/%7B3EA5228A-B620-4783-AE91-190F2C182DAA%7D/resource.pdf">T-shaped</a>” people has recently been recognised. Companies are now looking for graduates not just with a specific area of technical depth (the vertical stroke of the T), but also with professional skills and personal qualities (the horizontal stroke). Combined, these qualities enable graduates to see problems from different perspectives and work effectively in multidisciplinary teams. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183594/original/file-20170828-1549-1gla2gv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A Google self-driving car.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/romanboed/9572198632/in/photolist-fzS2as-dAtLrm-8XUCAY-ohAYe7-dW5TrG-eBWbNx-pehkzt-oQLPXk-bGdwo-ohhjHg-iKrUr5-bGdwn-fCyoaC-izMgC2-aMbnU2-g4P8w-qGxMtu-6rdujU-oSLAA2-eFeCPe-hRrw8M-aeaaQy-ENwDQj-wpZxdf-z62NVA-o7T6qb-dUQ5qV-9Fiind-7zWzs-embVPp-oi47W2-a8KbPa-QAXtnR-qPTpog-dUFy7q-druBYw-6NKSvq-92iCEB-4SewKg-rntpFL-9JwsGh-VSuL1f-9o1FrD-eb2sof-aUXTD8-WpavKu-csawD3-zdgRMN-RgKb9k-a7mSqE">Roman Boed</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Most undergraduate courses in computer science and similar disciplines include a course on professional ethics and practice. These are usually focused on intellectual property, copyright, patents and privacy issues, which are certainly important. </p>
<p>However, it seems clear from the discussions at IJCAI that there is an emerging need for additional material on broader ethical issues. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/never-mind-killer-robots-even-the-good-ones-are-scarily-unpredictable-82963">Never mind killer robots – even the good ones are scarily unpredictable</a>
</strong>
</em>
</p>
<hr>
<p>Topics could include methods for determining the lesser of two evils, legal concepts such as criminal negligence, and the historical effect of technology on society.</p>
<p>The key point is to enable graduates to integrate ethical and societal perspectives into their work from the very beginning. It also seems appropriate to require research proposals to demonstrate how ethical considerations have been incorporated. </p>
<p>As AI becomes more widely and deeply embedded in everyday life, it is imperative that technologists understand the society in which they live and the effect their inventions may have on it.</p>
<p class="fine-print"><em><span>James Harland does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Technologists need to understand the society in which they live, and the effect their inventions could have on it.James Harland, Associate Professor in Computational Logic, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/795252017-08-29T01:25:30Z2017-08-29T01:25:30ZHow to make robots that we can trust<figure><img src="https://images.theconversation.com/files/183550/original/file-20170828-27807-1r3468x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Can we trust a robot that makes decisions with real-world consequences?
</span> <span class="attribution"><span class="source">from www.shutterstock.com</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Self-driving cars, personal assistants, cleaning robots, smart homes – these are just some examples of autonomous systems.</p>
<p>With many such systems already in use or under development, a key question concerns trust. My central argument is that having trustworthy, well-working systems is not enough. To enable trust, the design of autonomous systems also needs to consider other requirements, including a capacity to explain decisions and to have recourse options when things go wrong.</p>
<h2>When doing a good job is not enough</h2>
<p>The past few years have seen dramatic advances in the deployment of autonomous systems. These are essentially software systems that make decisions and act on them, with real-world consequences. Examples include physical systems such as self-driving cars and robots, and software-only applications such as personal assistants. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/driverless-cars-could-see-humankind-sprawl-ever-further-into-the-countryside-83028">Driverless cars could see humankind sprawl ever further into the countryside</a>
</strong>
</em>
</p>
<hr>
<p>However, it is not enough to engineer autonomous systems that function well. We also need to consider what additional features people need to trust such systems. </p>
<p>For example, consider a personal assistant. Suppose the personal assistant functions well. Would you trust it, even if it could not explain its decisions?</p>
<p>To make a system one that people can trust, we need to identify the key prerequisites for trust. Then, we need to ensure that the system is designed to incorporate these features. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183553/original/file-20170828-27807-spwliv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A trustworthy robot may need to be able to explain its decisions.</span>
<span class="attribution"><span class="source">from shutterstock.com</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>What makes us trust?</h2>
<p>Ideally, we would answer this question using experiments. We could ask people whether they would be willing to trust an autonomous system. And we could explore how this depends on various factors. For instance, is providing guarantees about the system’s behaviour important? Is providing explanations important? </p>
<p>Suppose the system makes decisions that are critical to get right, for example, self-driving cars avoiding accidents. To what extent are we more cautious in trusting a system that makes such critical decisions?</p>
<p>These experiments have not yet been performed. The prerequisites discussed below are therefore effectively educated guesses. </p>
<h2>Please explain</h2>
<p>Firstly, a system should be able to explain why it made certain decisions. Explanations are especially important if the system’s behaviour can be non-obvious, but still correct. </p>
<p>For example, imagine software that coordinates disaster relief operations by assigning tasks and locations to rescuers. Such a system may propose task allocations that appear odd to an individual rescuer, but are correct from the perspective of the overall rescue operation. Without explanations, such task allocations are unlikely to be trusted. </p>
<p>Providing explanations allows people to <a href="https://theconversation.com/can-i-trust-my-robot-and-should-my-robot-trust-me-55553">understand the systems</a> and can <a href="https://theconversation.com/finding-trust-and-understanding-in-autonomous-technologies-70245">support trust</a> in unpredictable systems and unexpected decisions. These explanations need to be comprehensible and accessible, perhaps <a href="https://insights.sei.cmu.edu/sei_blog/2016/12/why-did-the-robot-do-that.html">using natural language</a>. They could be interactive, taking the form of a conversation. </p>
<h2>If things go wrong</h2>
<p>A second prerequisite for trust is recourse. This means having a way to be compensated, if you are adversely affected by an autonomous system. This is a necessary prerequisite because it allows us to trust a system that isn’t 100% perfect. And in practice, no system is perfect. </p>
<p>The recourse mechanism could be legal, or a form of insurance, perhaps modelled on New Zealand’s approach to <a href="http://www.health.govt.nz/new-zealand-health-system/publicly-funded-health-and-disability-services/accident-cover">accident compensation</a>. </p>
<p>However, relying on a legal mechanism has problems. At least some autonomous systems will be manufactured by large multinationals. A legal mechanism could turn into a David versus Goliath situation, since it involves individuals, or resource-limited organisations, taking multinational companies to court. </p>
<p>More broadly, trustability also requires social structures for regulation and governance. For example, what (inter)national laws should be enacted to regulate autonomous system development and deployment? What <a href="https://theconversation.com/we-must-be-sure-that-robot-ai-will-make-the-right-decisions-at-least-as-often-as-humans-do-34985">certification should be required</a> before a self-driving car is allowed on the road? </p>
<p>It has been argued that certification, and trust, <a href="https://theconversation.com/if-you-want-to-trust-a-robot-look-at-how-it-makes-decisions-24134">require verification</a>. Specifically, this means using mathematical techniques to provide guarantees regarding the decision making of autonomous systems. For example, guaranteeing that a car will never accelerate when it knows another car is directly ahead. </p>
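<p>To give a concrete flavour of such a guarantee, the property can be sketched as a simple safety envelope. The function below is invented for illustration and is not code from any real vehicle; note also that a formal verifier would mathematically prove the property holds in every reachable state of the control system, which is far stronger than a runtime check like this one.</p>

```python
def safe_acceleration(requested_accel, car_directly_ahead):
    """Enforce the invariant: never accelerate while another car is directly ahead.

    requested_accel: the controller's requested acceleration in m/s^2.
    car_directly_ahead: True if the sensors report a vehicle directly ahead.
    """
    if car_directly_ahead and requested_accel > 0:
        return 0.0  # clamp to zero: hold speed rather than accelerate
    return requested_accel  # braking (negative values) is always permitted
```

<p>Verification would aim to show that no sequence of sensor inputs and controller decisions can ever violate this invariant, rather than merely patching over violations as they occur.</p>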
<h2>Incorporating human values</h2>
<p>For some domains the system’s decision making process should take into account relevant human values. These may include privacy, human autonomy and safety. </p>
<p>Imagine a <a href="http://www.ifaamas.org/Proceedings/aamas2015/aamas/p1201.pdf">system that takes care of an aged person with dementia</a>. The elderly person wants to go for a walk. However, for safety reasons they should not be permitted to leave the house alone. Should the system allow them to leave? Prevent them from leaving? Inform someone? </p>
<p>Deciding how best to respond may require consideration of the relevant underlying human values. Perhaps in this scenario safety overrides autonomy, but informing a human carer or relative is possible, although the choice of who to inform may itself be constrained by privacy.</p>
<h2>Making autonomous systems smarter</h2>
<p>These prerequisites – explanations, recourse and human values – are needed to build trustable autonomous systems. They need to be considered as part of the design process. This would allow appropriate functionality to be engineered into the system. </p>
<p>Addressing these prerequisites requires interdisciplinary collaboration. For instance, developing appropriate explanation mechanisms requires not just computer science but human psychology. Similarly, developing software that can take into account human values requires philosophy and sociology. And questions of governance and certification involve law and ethics.</p>
<p>Finally, there are broader questions. Firstly, what decisions are we willing to hand over to software? Secondly, how should society prepare for and respond to the multitude of consequences that will come with the deployment of autonomous systems? </p>
<p>For instance, considering the <a href="https://theconversation.com/dont-be-alarmed-ai-wont-leave-half-the-world-unemployed-54958">impact on employment</a>, should society respond by introducing some form of <a href="https://theconversation.com/are-we-ready-for-robotopia-when-robots-replace-the-human-workforce-63653">Universal Basic Income</a>?</p>
<p class="fine-print"><em><span>Michael Winikoff has received funding from the ARC.</span></em></p>
<p class="fine-print"><em>We are witnessing dramatic advances in the deployment of autonomous systems, but are we designing robots that can be trusted?</em></p>
<p class="fine-print"><em>Michael Winikoff, Professor in Information Science, University of Otago. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Artificial intelligence cyber attacks are coming – but what does that mean?</h1>
<p class="fine-print">Published 2017-08-28</p>
<figure><img src="https://images.theconversation.com/files/182112/original/file-20170815-18355-4q1mez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Hackers will start to get help from robots and artificial intelligence soon.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/artificial-intelligence-hand-type-on-keyboard-539711077">Jinning Li/Shutterstock.com</a></span></figcaption></figure>
<p>The next major cyberattack could involve artificial intelligence systems. It could even happen soon: at a recent cybersecurity conference, 62 industry professionals, <a href="https://www.cylance.com/en_us/blog/black-hat-attendees-see-ai-as-double-edged-sword.html">out of the 100 questioned</a>, said they thought the first AI-enhanced cyberattack could come in the next 12 months.</p>
<p>This doesn’t mean robots will be marching down Main Street. Rather, artificial intelligence will make existing cyberattack efforts – things like identity theft, denial-of-service attacks and password cracking – more powerful and more efficient. This is dangerous enough – this type of hacking can steal money, <a href="https://www.equifax.com/assets/PSOL/15-9814_psol_emotionalToll_wp.pdf">cause emotional harm</a> and even <a href="https://www.wired.com/2016/08/jeep-hackers-return-high-speed-steering-acceleration-hacks/">injure or kill people</a>. Larger attacks can <a href="https://doi.org/10.1109/JPROC.2011.2165269">cut power</a> to <a href="http://dx.doi.org/10.1111/risa.12844">hundreds of thousands of people</a>, <a href="https://theconversation.com/the-petya-ransomware-attack-shows-how-many-people-still-dont-install-software-updates-77667">shut down hospitals</a> and even <a href="http://dx.doi.org/10.1111/risa.12844">affect national security</a>. </p>
<p>As a scholar who has <a href="https://doi.org/10.1016/j.techsoc.2013.12.004">studied AI decision-making</a>, I can tell you that interpreting human actions is still difficult for AIs and that humans <a href="https://theconversation.com/finding-trust-and-understanding-in-autonomous-technologies-70245">don’t really trust AI systems</a> to make major decisions. So, unlike in the movies, the capabilities AI could bring to cyberattacks – and cyberdefense – are not likely to immediately involve computers choosing targets and attacking them on their own. People will still have to create attack AI systems and launch them at particular targets. Nevertheless, adding AI to today’s cybercrime and cybersecurity world will <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">escalate</a> what is already a rapidly changing arms race between attackers and defenders.</p>
<h2>Faster attacks</h2>
<p>Beyond computers’ lack of need for food and sleep – needs that limit human hackers’ efforts, even when they work in teams – automation can make complex attacks much faster and more effective. </p>
<p>To date, the effects of automation have been limited. Very rudimentary AI-like capabilities have for decades given virus programs <a href="https://www.cisco.com/c/en/us/about/security-center/virus-differences.html">the ability to self-replicate</a>, spreading from computer to computer without specific human instructions. In addition, programmers have used their skills to automate different elements of hacking efforts. Distributed attacks, for example, involve triggering a remote program on several computers or devices to overwhelm servers. The attack that <a href="https://www.welivesecurity.com/2016/10/24/10-things-know-october-21-iot-ddos-attacks/">shut down large sections of the internet in October 2016</a> used this type of approach. In some cases, common attacks are made available as a script that allows an unsophisticated user to choose a target and launch an attack against it.</p>
<p>AI, however, could help human cybercriminals customize attacks. <a href="https://theconversation.com/spearphishing-roiled-the-presidential-campaign-heres-how-to-protect-yourself-68274">Spearphishing attacks</a>, for instance, require attackers to have personal information about prospective targets, details like where they bank or what medical insurance company they use. AI systems can help gather, organize and process large databases to connect identifying information, making this type of attack easier and faster to carry out. That reduced workload may drive thieves to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact.</p>
<p>AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack. Someone who is hospitalized or in a nursing home, for example, might not notice money missing from their account until long after the thief has gotten away.</p>
<h2>Improved adaptation</h2>
<p>AI-enabled attackers will also be much faster to react when they encounter resistance, or when cybersecurity experts fix weaknesses that had previously allowed entry by unauthorized users. The AI may be able to exploit another vulnerability, or start scanning for new ways into the system – without waiting for human instructions. </p>
<p>This could mean that human responders and defenders find themselves unable to keep up with the speed of incoming attacks. It may result in a <a href="https://doi.org/10.1016/j.techsoc.2015.12.003">programming and technological arms race</a>, with defenders developing AI assistants to identify and protect against attacks – or perhaps even AIs with <a href="https://theconversation.com/cybersecuritys-next-phase-cyber-deterrence-67090">retaliatory attack capabilities</a>.</p>
<h2>Avoiding the dangers</h2>
<p>Operating autonomously could lead an AI system to attack targets it shouldn’t, or to <a href="https://www.theguardian.com/world/2015/jul/02/robot-kills-worker-at-volkswagen-plant-in-germany">cause unexpected damage</a>. For example, software started by an attacker intending only to steal money might decide to target a hospital computer in a way that causes human injury or death. The potential for <a href="https://doi.org/10.2139/ssrn.2283767">unmanned aerial vehicles to operate autonomously</a> has raised similar questions about the need for <a href="https://theconversation.com/losing-control-the-dangers-of-killer-robots-58262">humans to make the decisions about targets</a>. </p>
<p>The consequences and implications are significant, but most people won’t notice a big change when the first AI attack is unleashed. For most of those affected, the outcome will be the same as for human-triggered attacks. But as we continue to fill our homes, factories, offices and roads with internet-connected robotic systems, the potential effects of an attack by artificial intelligence only grow.</p>
<p class="fine-print"><em><span>Jeremy Straub is the Associate Director of the NDSU Institute for Cyber Security Education and Research.</span></em></p>
<p class="fine-print"><em>It won’t be like an army of robots marching in the streets, but AI hacking is on the horizon.</em></p>
<p class="fine-print"><em>Jeremy Straub, Assistant Professor of Computer Science, North Dakota State University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Never mind killer robots – even the good ones are scarily unpredictable</h1>
<p class="fine-print">Published 2017-08-25</p>
<figure><img src="https://images.theconversation.com/files/183350/original/file-20170824-18702-1fxlsp9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who could have predicted it would end like this?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure>
<p>The heads of more than 100 of the world’s top artificial intelligence companies are very alarmed about the development of “killer robots”. In an <a href="https://futureoflife.org/autonomous-weapons-open-letter-2017">open letter</a> to the UN, these business leaders – including Tesla’s Elon Musk and the founders of Google’s DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways.</p>
<p>But the real threat is much bigger – and it comes not just from human misconduct but from the machines themselves. Research into complex systems shows how behaviour can emerge that is much more unpredictable than the sum of individual actions. On one level this means human societies can behave very differently from what you might expect just by looking at individual behaviour. But it can also apply to technology. Even ecosystems of relatively simple AI programs – what we call stupid, good bots – can surprise us, even when the individual bots are behaving well.</p>
<p>The individual elements that make up complex systems, such as economic markets or global weather, tend not to interact in a simple linear way. This makes these systems very hard to model and understand. For example, even after many years of climatology, it’s still impossible to make long-term weather predictions. These systems are often very sensitive to small changes and can experience explosive feedback loops. It is also very difficult to know the precise state of such a system at any one time. All these things make these systems intrinsically unpredictable. </p>
<p>All these principles apply to large groups of individuals acting in their own way, whether that’s human societies or groups of AI bots. My colleagues and I <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0171774">recently studied</a> one type of complex system that featured good bots used to automatically edit Wikipedia articles. These different bots are designed and run by Wikipedia’s trusted human editors, and their underlying software is open-source and available for anyone to study. Individually, they all have a common goal of improving the encyclopaedia. Yet their collective behaviour turns out to be surprisingly inefficient.</p>
<p>These Wikipedia bots work based on well-established rules and conventions, but because the website doesn’t have a central management system there is no effective coordination between the people running different bots. As a result, we found pairs of bots that have been undoing each other’s edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn’t notice it either.</p>
<p>The bots are designed to speed up the editing process. But slight differences in the design of the bots or between people who use them can lead to a massive waste of resources in an ongoing “edit war” that would have been resolved much quicker with human editors.</p>
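<p>The dynamic can be illustrated with a toy simulation – invented for this article, not the actual Wikipedia bot code. Each bot below correctly enforces its own spelling convention, yet together they never let the article settle:</p>

```python
# Two rule-following bots, each individually "correct" by its own convention,
# locked in an endless edit war on a shared article.

def bot_a(text):
    # Bot A enforces British spelling.
    return text.replace("color", "colour")

def bot_b(text):
    # Bot B enforces American spelling.
    return text.replace("colour", "color")

article = "The color of the sky"
history = [article]
for step in range(6):
    # The bots take turns editing, as they would on a shared wiki page.
    article = bot_a(article) if step % 2 == 0 else bot_b(article)
    history.append(article)

# The article never stabilises: every edit is undone by the next bot,
# even though neither bot is broken by its own standards.
```

<p>Neither bot is malfunctioning; the wasted effort is purely a product of their interaction, which is exactly why it can persist for years without anyone noticing.</p>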
<p>We also found that the bots behaved differently in different language editions of Wikipedia. The rules are more or less the same, the goals are identical, the technology is similar. But in German Wikipedia, the collaboration between bots is much more efficient and productive compared to, for example, Portuguese Wikipedia. This can only be explained by the differences between the human editors who run these bots in different environments.</p>
<h2>Exponential confusion</h2>
<p>Wikipedia bots have very little autonomy, and the system as a whole already behaves very differently from what the goals of the individual bots would suggest. But the Wikimedia Foundation is <a href="https://blog.wikimedia.org/2017/07/19/scoring-platform-team/">planning to use</a> AI that will give more autonomy to the bots. That will likely lead to even more unexpected behaviour. </p>
<p>Another example is what can happen when two bots designed to speak to humans interact with each other. We’re no longer surprised by the answers given by artificial personal assistants such as the iPhone’s Siri. But put several of these kinds of chatbots together and they can quickly start acting in surprising ways, arguing and even insulting each other. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WnzlbyTZsQY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The bigger the system becomes and the more autonomous each bot is, the more complex – and hence unpredictable – the future behaviour of the system will be. Wikipedia is an example of a large number of relatively simple bots; the chatbot example involves a small number of rather sophisticated and creative bots. In both cases, unexpected conflicts emerged. The complexity, and therefore the unpredictability, increases exponentially as you add more and more individuals to the system. So in a future system with a large number of very sophisticated robots, the unexpected behaviour could go beyond our imagination.</p>
<h2>Self-driving madness</h2>
<p>For example, self-driving cars promise exciting advances in the efficiency and safety of road travel. But we don’t yet know what will happen once we have a large, wild system of fully autonomous vehicles. They may well behave very differently to a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars “trained” by different humans in different environments start interacting with one another.</p>
<p>Humans can adapt to new rules and conventions relatively quickly but can still have trouble switching between systems. This can be way more difficult for artificial agents. If a “German-trained” car was driving in Italy, for example, we just don’t know how it would deal with the written rules and unwritten cultural conventions being followed by the many other “Italian-trained” cars. Something as common as crossing an intersection could become lethally risky because we just wouldn’t know if the cars would interact as they were supposed to or whether they would do something completely unpredictable.</p>
<p>Now think of the killer robots that Elon Musk and his colleagues are worried about. A single killer robot could be very dangerous in the wrong hands. But what about an unpredictable system of killer robots? I don’t even want to think about it.</p>
<p class="fine-print"><em><span>Taha Yasseri receives funding from the European Commission and Google.</span></em></p>
<p class="fine-print"><em>The unexpected behaviour of even simple bots is only going to get more dramatic as AI scales up.</em></p>
<p class="fine-print"><em>Taha Yasseri, Research Fellow in Computational Social Science, Oxford Internet Institute, University of Oxford. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>The new industrial revolution: robots are an opportunity, not a threat</h1>
<p class="fine-print">Published 2017-08-08</p>
<figure><img src="https://images.theconversation.com/files/181365/original/file-20170808-10926-izvxcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/robotic-arm-painting-car-assembly-shop-689621176">Shutterstock</a></span></figcaption></figure>
<p>Invasion. Takeover. These are the kind of words that have been bandied about in news headlines about robotics and artificial intelligence in the last few years. The coverage has been almost relentlessly negative, focusing on the threat to jobs, squeezing out the human component. While such potential is there, if robotics and AI do become a threat, then we believe this would be a threat of society’s own choosing.</p>
<p>The <a href="https://www.gov.uk/government/news/robotics-and-autonomous-systems-apply-for-innovation-funding">market impact</a> of robotics and <a href="https://www.techopedia.com/definition/11063/autonomous-system-as">autonomous systems</a> is estimated to be US$9.8 to US$19.3 trillion a year by 2025, but a recent <a href="https://connect.innovateuk.org/documents/2903012/16074728/RAS%20UK%20Strategy">report</a> from the <a href="https://www.suttontrust.com/about-us/">Sutton Trust</a> stressed concerns that this could lead to a two-tier society with:</p>
<blockquote>
<p>An elite high-skilled group dominating the higher echelon of society and a lower-skilled, low-income group with limited prospects of up-skilling and hence upward mobility, resulting in a broken social ladder.</p>
</blockquote>
<p>Technical innovation has always had an impact on the status quo and stirred fears of what change might bring. Currently the fear is that the owners of the means of production will become rich, while others will see their jobs and livelihoods taken by robots.</p>
<h2>Living in a connected era</h2>
<p>The revolution in robotics and autonomous systems has already begun. We live in a connected era where affordable technology interacts with us and with other natural and physical assets in our environment, turning data into information for a global audience.</p>
<p>AI has the ability to bring expert knowledge to the lay person remotely – that is, anywhere in the world – and support them in their endeavours like a virtual mentor, customising information in a usable format they can engage with. By giving people this knowledge, it offers unprecedented opportunities, ranging from garden-shed innovators gaining access to manufacturing processes that would previously have been beyond their reach, to the potential for wealth creation in countries that are most in need of it. </p>
<p>For example, AI is helping communities in developing nations implement local renewable energy systems by providing intelligent automation and monitoring – almost like an online “doctor”, making sure the system is “healthy” and working properly. This means communities not only gain access to affordable and sustainable energy but can also engage in trading of any surplus energy to other consumers or utility companies.</p>
<p>But to achieve these benefits society has to be ready to grasp them. Governments, business and academia all have a responsibility to prepare the current and future workforce for the imminent and dramatic changes to come, and society as a whole has to buy into this new industrial revolution.</p>
<p>Contrary to the perceived apocalyptic scenario, we believe the future is all about people: after all, the value of technologies is in the knowledge we humans embed within them and how we interact with them, not the machines themselves. For that human-based scenario to work we need an <a href="https://www.agilebusiness.org/business-agility">agile</a>, future-ready workforce, ready to embrace a data-driven world in partnership with robotics and autonomous systems.</p>
<p>An existing example is the Siemens “<a href="http://www.rcrwireless.com/20160810/internet-of-things/lights-out-manufacturing-tag31-tag99">lights out</a>” manufacturing plant in Amberg, Germany, which is automated to the point where some lines can run unsupervised for several weeks. This is viewed as a stepping stone towards a fully self-organising factory that would allow the manufacture of highly customisable products. Yet this automated factory has 1,150 employees supporting it, just with different roles focused on programming, monitoring and machine maintenance.</p>
<h2>The new industrial revolution</h2>
<p>Since the first industrial revolution people have generally had to follow the technology, moving to the big cities or from developing nations to developed countries, to improve their access to the opportunities offered by technology.</p>
<p>At Heriot-Watt we are embracing both the newest technological opportunities and the challenges that they bring. Our work involves engaging with government and industry around a <a href="http://paidcontent.afr.com/accenture/strategy/article/5-trends-reshaping-people-first-ethos-digital-age/">people first</a> ethos via our <a href="https://www.hw.ac.uk/schools/engineering-physical-sciences/institutes/embedded-intelligence-cdt.htm">Centre of Embedded Innovation</a> (CEI) concept, which is reaching out to all of our communities to enable access to the latest advances in data analysis, robotics and autonomous systems.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/181376/original/file-20170808-22949-1a6cul2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The rise of robots in industry means we must look at new ways of working and collaborating with them.</span>
</figcaption>
</figure>
<p><a href="http://www.suttontrust.com/wp-content/uploads/2016/12/International-inequalities_FINAL.pdf">Studies</a> by the Sutton Trust have indicated that in the past decade in the UK, the gap between rich and poor in society has increased, with inequality now at a record high. The CEI seeks to <a href="https://www.hw.ac.uk/schools/engineering-physical-sciences/institutes/embedded-intelligence-cdt.htm">address</a> this societal imbalance through making opportunity more equal, in which education plays the key role, and helping to create a new “embedded” industrial revolution which supports business and economic growth, transferring knowledge and resources to those communities most at risk of poverty.</p>
<p>Wealth creation and innovation must be centred on people to achieve a prosperous future that is inclusive throughout society. It’s also clear that the speed of this industrial revolution warrants a transformative change in strategy through government, industry and academia.</p>
<p class="fine-print"><em><span>David Flynn receives funding from Engineering and Physical Sciences Research Council (EPSRC) and InnovateUK in a number of research grants.</span></em></p>
<p class="fine-print"><em><span>Valentin Robu receives funding from Engineering and Physical Sciences Research Council (EPSRC) and InnovateUK in a number of research grants.</span></em></p>
<p class="fine-print"><em>If robots and AI are our future, we need to embrace the technology and work out how best to collaborate and make it work for everyone.</em></p>
<p class="fine-print"><em>David Flynn, Associate Professor, Embedded Intelligence in Energy Systems, Heriot-Watt University; Valentin Robu, Lecturer in Smart Grids, Heriot-Watt University. Licensed as Creative Commons – attribution, no derivatives.</em></p>