<h1>‘Geostorm’ movie shows dangers of hacking the climate – we need to talk about real-world geoengineering now</h1><figure><img src="https://images.theconversation.com/files/191065/original/file-20171019-1078-lzm6lf.jpg?ixlib=rb-1.1.0&rect=80%2C0%2C1117%2C718&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Is this the endgame for any geoengineering scenario?</span> <span class="attribution"><a class="source" href="https://www.youtube.com/watch?v=ncZ4Bc2xQUM">'Geostorm' still</a></span></figcaption></figure><p>Hollywood’s latest disaster flick, “<a href="http://www.geostorm.movie/">Geostorm</a>,” is premised on the idea that humans have figured out how to control the Earth’s climate. A powerful satellite-based technology allows users to fine-tune the weather, overcoming the ravages of climate change. Everyone, everywhere can quite literally “have a nice day,” until – spoiler alert! – things do not go as planned.</p>
<p>Admittedly, the movie is a fantasy set in a deeply unrealistic near-future. But coming on the heels of one of the most extreme hurricane seasons in recent history, it’s tempting to imagine a world where we could regulate the weather. Despite a <a href="https://cup.columbia.edu/book/fixing-the-sky/9780231144131">long history of interest in weather modification</a>, controlling the climate is, to be frank, unattainable with current technology. But underneath the frippery of “Geostorm,” is there a valid message about the promises and perils of planetary management?</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/EuOlYPSEzSc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">‘Geostorm’ is far-fetched, but scientists are taking seriously the idea of engineering Earth’s climate.</span></figcaption>
</figure>
<h2>Fiddling with our global climate</h2>
<p>The technology in the movie “Geostorm” is laughably fantastical. But the idea of technologies that might be used to “geoengineer” the climate is not.</p>
<p>Geoengineering, also called <a href="https://royalsociety.org/topics-policy/publications/2009/geoengineering-climate/">climate engineering</a>, is a set of emerging technologies that could potentially offset some of the consequences of climate change. <a href="https://www.scientificamerican.com/article/latest-ipcc-climate-report-puts-geoengineering-in-the-spotlight/">Some scientists are taking it seriously</a>, considering geoengineering among the range of approaches for managing the risks of climate change – although always as a complement to, and not a substitute for, reducing emissions and adapting to the effects of climate change.</p>
<p>These innovations are often lumped into two categories. <a href="https://www.nap.edu/read/18805/chapter/1">Carbon dioxide removal</a> (or negative emissions) technologies set out to actively remove greenhouse gases from the atmosphere. In contrast, <a href="https://www.nap.edu/read/18988/chapter/1">solar radiation management</a> (or solar geoengineering) aims to reduce how much sunlight reaches the Earth.</p>
<p>Because <a href="http://fore.yale.edu/climate-change/science/the-greenhouse-effect-and-the-bathtub-effect/">it takes time for the climate to respond to changes</a>, even if we stopped emitting greenhouse gases today, some level of climate change – and its associated risks – is unavoidable. Advocates of solar geoengineering argue that, if done well, these technologies <a href="http://onlinelibrary.wiley.com/doi/10.1002/2016EF000465/epdf">might help limit some effects</a>, including sea level rise and changes in weather patterns, and do so quickly.</p>
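<p>To make that lag concrete, here is a minimal sketch of a one-box energy-balance model, the kind climate scientists use as a first approximation. All parameter values are illustrative assumptions chosen for readability, not published estimates.</p>
<pre><code># One-box energy-balance model illustrating climate inertia: even with
# radiative forcing held fixed (emissions stopped), temperature keeps
# adjusting for decades. All values are illustrative assumptions.
C = 40.0    # effective heat capacity of the climate system, W*yr/(m^2*K) (assumed)
lam = 1.2   # climate feedback parameter, W/(m^2*K) (assumed)
F = 3.7     # radiative forcing held constant once emissions stop, W/m^2 (assumed)

T = 0.0     # global temperature anomaly, K
for year in range(1, 101):
    # Euler step of C*dT/dt = F - lam*T: warming continues until T reaches F/lam
    T += (F - lam * T) / C
    if year in (10, 30, 100):
        print(f"year {year:3d}: {T:.2f} K of {F / lam:.2f} K committed warming")
</code></pre>
<p>Under these assumed numbers, less than a third of the eventual warming has arrived after a decade: the rest is already committed.</p>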
<p>But as might be expected, the idea of intentionally tinkering with the Earth’s atmosphere to curb the impacts of climate change is controversial. Even <a href="http://www.etcgroup.org/fr/node/5282">conducting research into climate engineering</a> raises some hackles.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/191125/original/file-20171019-1088-trs59g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Shading the Earth from the sun’s rays shouldn’t be a solitary pursuit.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/The-Sunshade-Option/11e7b635e1fe4de6b8aad26d8a5f4483/4/0">AP Photo/Channi Anand</a></span>
</figcaption>
</figure>
<h2>Global stakes are high</h2>
<p>Geoengineering could reshape our world in fundamental ways. Because attempts to engineer the planet will inevitably have global impacts, this isn’t a technology that some people can selectively opt in to or out of: Geoengineering has the potential to affect everyone. Moreover, it raises profound questions about <a href="https://www.theguardian.com/science/political-science/2013/jul/29/messing-nature-geoengineering-green-thought">humans’ relationship to nonhuman nature</a>. The conversations that matter are ultimately less about the technology itself and more about what we collectively stand to gain or lose politically, culturally and socially.</p>
<p>Much of the debate over the advisability of geoengineering research has focused on solar geoengineering, not carbon dioxide removal. One worry is that figuring out aspects of solar geoengineering could lead us down a <a href="https://thebulletin.org/trouble-geoengineers-%E2%80%9Chacking-planet%E2%80%9D10858">slippery slope to actually doing it</a>: Just doing the research could make deployment more likely, even if solar geoengineering proves to be a really bad idea. Research also comes with the risk that the techniques might benefit some people while harming others, potentially exacerbating existing inequalities or creating new ones.</p>
<p>For example, <a href="http://onlinelibrary.wiley.com/doi/10.1029/2008JD010050/abstract">early studies using computer models</a> indicated that injecting particles into the stratosphere to cool parts of Earth might disrupt the Asian and African summer monsoons, threatening the food supply for billions of people. Even if deployment <a href="http://onlinelibrary.wiley.com/doi/10.1002/2016EF000416/full">wouldn’t necessarily result in regional inequalities</a>, the prospect of solar geoengineering raises questions about <a href="https://link.springer.com/article/10.1007/s10784-017-9377-6">who has the power to shape our climate futures</a>, and who and what gets left out.</p>
<p>Other concerns focus on possible unintended consequences of large-scale open-air experimentation – especially when our whole planet becomes the lab. There’s a fear that the consequences would be irreversible, and that <a href="http://science.sciencemag.org/content/327/5965/530.full">the line between research and deployment is inherently fuzzy</a>. </p>
<p>And then there’s the distraction problem, often known as the “<a href="http://www.slate.com/articles/technology/future_tense/2016/01/geoengineering_might_give_people_an_excuse_to_ignore_climate_change_s_causes.html">moral hazard</a>.” Even researching geoengineering as one potential response to climate change may distract from the necessary and difficult work of reducing greenhouse gas levels and adapting to a changing climate – not to mention the challenges of encouraging more sustainable lifestyles and practices.</p>
<p>To be fair, many scientists in the small geoengineering community take these concerns very seriously. This was evident in the robust <a href="https://www.carbonbrief.org/geoengineering-scientists-berlin-debate-radicaly-ways-reverse-global-warming">conversations around the ethics and politics of geoengineering</a> at a <a href="http://www.ce-conference.org/cec17-program">recent meeting in Berlin</a>. But there’s still no consensus on whether and how to engage in responsible geoengineering research.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/191062/original/file-20171019-1048-7nvvyo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Climate outcomes are not good for humanity in the Hollywood version of geoengineering.</span>
<span class="attribution"><a class="source" href="http://www.hollywoodreporter.com/heat-vision/geostorm-trailer-debuts-video-984391">Still from 'Geostorm'</a></span>
</figcaption>
</figure>
<h2>A geostorm in a teacup?</h2>
<p>So how close are we to the dystopian future of “Geostorm”? The truth is that geoengineering is still little more than a twinkle in the eyes of a small group of scientists. In the words of Jack Stilgoe, author of the book “<a href="https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=jack+stilgoe">Experiment Earth: Responsible innovation in geoengineering</a>”:</p>
<blockquote>
<p>“We shouldn’t be scared of geoengineering, at least not yet. It is neither as exciting nor as terrifying as we have been led to believe, for the simple reason that it doesn’t exist.”</p>
</blockquote>
<p>Compared to other emerging technologies, solar geoengineering has no industrial demand and no strong economic driver as yet, and simply doesn’t appeal to national interests in global competitiveness. Because of this, it’s an idea that’s struggled to translate from the pages of academic papers and newsprint into reality.</p>
<p>Even government agencies appear <a href="http://issues.org/33-3/toward-a-responsible-solar-geoengineering-research-program/">wary of funding outdoor research</a> into solar geoengineering – possibly because it’s an ethically fraught area, but also because it’s an academically interesting idea with no clear economic or political return for those who invest in it.</p>
<p>Yet some supporters make a strong case for knowing more about the potential benefits, risks and efficacy of these ideas. So scientists are beginning to turn to private funding. Harvard University, for instance, recently launched the <a href="https://geoengineering.environment.harvard.edu/">Solar Geoengineering Research Program</a>, funded by Bill Gates, the Hewlett Foundation and others.</p>
<p>As part of this program, researchers David Keith and Frank Keutsch <a href="https://www.technologyreview.com/s/603974/harvard-scientists-moving-ahead-on-plans-for-atmospheric-geoengineering-experiments/">are already planning small-scale experiments</a> to inject fine sunlight-reflecting particles into the stratosphere above Tucson, Arizona. It’s a very small experiment, and <a href="http://www.spice.ac.uk/">wouldn’t be the first</a>, but it aims to generate new information about whether and how such particles might one day be used to control the amount of sunlight reaching the Earth.</p>
<p>And importantly, it suggests that, where governments fear to tread, wealthy individuals and philanthropy may end up pushing the boundaries of geoengineering research – with or without the rest of society’s consent.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&rect=38%2C97%2C1239%2C620&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&rect=38%2C97%2C1239%2C620&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/191064/original/file-20171019-1078-zzbo0m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hollywood’s version of the technology is one thing, but it’s time to talk about what a real future could be.</span>
<span class="attribution"><a class="source" href="http://www.dailymotion.com/video/x5ea2z6">Still from 'Geostorm'</a></span>
</figcaption>
</figure>
<h2>The case for public dialogue</h2>
<p>The upshot is there’s a growing need for public debate around whether and how to move forward.</p>
<p>Ultimately, no amount of scientific evidence is likely to single-handedly resolve wider debates about the benefits and risks – we’ve learned this much from the persistent debates about <a href="http://issues.org/32-1/crispr-democracy-gene-editing-and-the-need-for-inclusive-deliberation/">genetically modified organisms</a>, <a href="https://link.springer.com/article/10.1007%2FBF01682418">nuclear power</a> and other high-impact technologies.</p>
<p>Leaving these discussions to experts is not only counter to democratic principles but likely to be self-defeating, as more research in complex domains <a href="http://www.sciencedirect.com/science/article/pii/S1462901104000620">can often make controversies worse</a>. The bad news here is that <a href="http://onlinelibrary.wiley.com/doi/10.1002/2016EF000461/full">research on public views about geoengineering</a> (admittedly limited to Europe and the U.S.) suggests that most people are unfamiliar with the idea. The good news, though, is that social science research and practical experience have shown that people have the capacity to <a href="https://ecastnetwork.org/">learn and deliberate on complex technologies</a>, if given the opportunity.</p>
<p>As researchers in the responsible development and use of emerging technologies, we suggest less speculation about the ethics of imagined geoengineered futures, which can sometimes close down, rather than open up, decision-making about these technologies. Instead, we need more rigor in how we think about near-term choices around researching these ideas in ways that respond to social norms and contexts. This includes thinking hard about whether and how to govern privately funded research in this domain. And uncomfortable as it may feel, it means that scientists and political leaders need to remain open to the possibility that societies will not want to develop these ideas at all. </p>
<p>All of this is a far cry from the Hollywood hysteria of “Geostorm.” Yet decisions about geoengineering research are already being made in real life. We probably won’t have satellite-based weather control any time soon. But if scientists intend to research technologies to deliberately intervene in our climate system, we need to start talking seriously about whether and how to collectively, and responsibly, move forward.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A disaster fantasy raises questions about tinkering with Earth’s climate. With real-life scientists exploring geoengineering, what conversations should we be having now around these technologies?Jane A. Flegal, Ph.D. Candidate, Environmental Science, Policy, and Management, University of California, BerkeleyAndrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/849482017-10-02T00:52:48Z2017-10-02T00:52:48ZDear Elon Musk: Your dazzling Mars plan overlooks some big nontechnical hurdles<figure><img src="https://images.theconversation.com/files/188262/original/file-20171001-13542-15sjp1f.jpg?ixlib=rb-1.1.0&rect=0%2C787%2C6572%2C5470&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Will it be only a few decades before Mars tourism is a reality?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/spacex/17504334828">SpaceX</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p><a href="http://www.spacex.com/mars">Elon Musk has a plan</a>, and it’s about as audacious as they come. Not content with living on our pale blue dot, Musk and his company SpaceX want to colonize Mars, fast. They say they’ll send a duo of supply ships to the red planet <a href="http://www.spacex.com/mars">within five years</a>. By 2024, they’re aiming to send the first humans. From there they have visions of building a space port, a city and, ultimately, a planet they’d like to “geoengineer” to be as welcoming as a second Earth.</p>
<p>If he succeeds, Musk could thoroughly transform our relationship with our solar system, inspiring a new generation of scientists and engineers along the way. But between here and success, Musk and SpaceX will need to traverse an unbelievably complex risk landscape.</p>
<p>Many of these risks will be technical. The rocket that’s going to take Musk’s colonizers to Mars (code-named the “<a href="https://www.theverge.com/2017/9/30/16384096/elon-musk-spacex-bfr-mars-rocket-development-business-demand">BFR</a>” – no prizes for guessing what that stands for) hasn’t even been built yet. No one knows what hidden hurdles will emerge as testing begins. Musk does have a habit of <a href="https://youtu.be/tdUX3ypDVwI?t=6m48s">successfully solving complex engineering problems</a>, though; and despite the mountainous technical challenges SpaceX faces, there’s a fair chance the company will succeed.</p>
<p>As a scholar of <a href="https://riskinnovation.asu.edu/about/">risk innovation</a>, what I’m not sure about is how SpaceX will handle some of the less obvious social and political hurdles they face. To give Elon Musk a bit of a head start, here are some of the obstacles I think he should have on his mission-to-Mars checklist.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188263/original/file-20171001-19819-1xz2epq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Musk typically drives hard toward his goals – in this case, Mars.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Mexico-SpaceX-Mars/62bffff820c3428f91a488d5598de802/3/0">AP Photo/Refugio Ruiz</a></span>
</figcaption>
</figure>
<h2>Planetary protection</h2>
<p>Imagine there was once life on Mars, but in our haste to set up shop there, we obliterate any trace of its existence. Or imagine that harmful organisms exist on Mars and spacecraft inadvertently bring them back to Earth.</p>
<p>These are scenarios that keep astrobiologists and <a href="https://planetaryprotection.nasa.gov/overview">planetary protection specialists</a> awake at night. They’ve led to unbelievably stringent <a href="https://cosparhq.cnes.fr/sites/default/files/pppolicy.pdf">international policies</a> around what can and cannot be done on government-sponsored space missions.</p>
<p>Yet Musk’s plans threaten to throw the rule book on planetary protection out the window. As a private company SpaceX isn’t directly bound by international planetary protection policies. And while some governments could wrap the company up in space bureaucracy, they’ll find it hard to impose the same levels of hoop-jumping that NASA missions, for instance, <a href="https://theconversation.com/worries-about-spreading-earth-microbes-shouldnt-slow-search-for-life-on-mars-84742">currently need to navigate</a>.</p>
<p>It’s conceivable (but extremely unlikely) that a laissez-faire attitude toward interplanetary contamination could lead to Martian bugs invading Earth. The bigger risk is stymying our chances of ever discovering whether life existed on Mars before human beings and their grubby microbiomes get there. And the last thing Musk needs is a whole community of disgruntled astrobiologists baying for his blood as he tramples over their turf and robs them of their dreams.</p>
<h2>Ecoterrorism</h2>
<p>Musk’s long-term vision is to terraform Mars – reengineer our neighboring planet as “<a href="https://youtu.be/tdUX3ypDVwI?t=39m48s">a nice place to be</a>” – and <a href="https://doi.org/10.1089/space.2017.29009.emu">allow humans to become a multi-planetary species</a>. Sounds awesome – but not to everyone. I’d wager there will be some people sufficiently appalled by the idea that they decide to take illegal action to interfere with it.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=451&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=451&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=451&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=567&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=567&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188266/original/file-20171001-19819-svu80q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=567&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ecoterrorists claimed responsibility in 1998 for burning part of a Colorado ski resort they said threatened animal habitats.</span>
<span class="attribution"><a class="source" href="http://danielglick.net/books/powder-burn/">Vail Fire Department</a></span>
</figcaption>
</figure>
<p>The mythology surrounding ecoterrorism makes it hard to pin down how much of it actually happens. But there certainly are individuals and groups like the <a href="https://en.wikipedia.org/wiki/Earth_Liberation_Front">Earth Liberation Front</a> willing to flout the law in their quest to preserve pristine wildernesses. It’s a fair bet there will be people similarly willing to take extreme action to stop the pristine wilderness of Mars being desecrated by humans.</p>
<p>How this might play out is anyone’s guess, although science fiction novels like Kim Stanley Robinson’s “<a href="https://www.harpercollins.co.uk/9780008121778/the-complete-mars-trilogy">Mars Trilogy</a>” give an interesting glimpse into what could transpire once we get there. More likely, SpaceX will need to be on the lookout for saboteurs crippling their operations before leaving Earth.</p>
<h2>Space politics</h2>
<p>Back in the days before <a href="http://www.huffingtonpost.ca/amyrose-lane/private-space-companies_b_8821120.html">private companies were allowed to send rockets into space</a>, international agreements were signed that set out who could do what outside the Earth’s atmosphere. Under the United Nations <a href="http://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html">Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies</a>, for instance, states agreed to explore space for the benefit of all humankind, not place weapons of mass destruction on celestial bodies and avoid harmful contamination.</p>
<p>That was back in 1967, four years before Elon Musk was born. With the emergence of ambitious private space companies like <a href="https://www.space.com/8541-6-private-companies-launch-humans-space.html">SpaceX, Blue Origin and others</a>, though, who’s allowed to do what in the solar system is less clear. It’s good news for companies like SpaceX – at least in the short term. But this uncertainty is eventually going to crystallize into enforceable space policies, laws and regulations that apply to everyone. And when it does, Musk needs to make sure he’s not left out in the cold.</p>
<p>This is of course policy, not politics. But there are powerful players in the global space policy arena. If they’re rubbed the wrong way, it’ll be politics that determines how resulting policies affect SpaceX.</p>
<h2>Climate change</h2>
<p>Perhaps the biggest danger is that Musk’s vision of colonizing Mars looks too much like a <a href="https://www.treehugger.com/corporate-responsibility/disposable-earth.html">disposable Earth</a> philosophy – we’ve messed up this planet, so time to move on to the next. Of course, this idea may not factor into Musk’s motivation, but in the world of climate change mitigation and adaptation, perceptions matter. The optics of moving to a new planet to escape the mess we’ve made here is not a scenario that’s likely to win too many friends amongst those trying to ensure Earth remains habitable. And these factions wield considerable social and economic power – enough to cause problems for SpaceX if they decide to mobilize over this.</p>
<p>There is another risk here too, thanks to a proposed terrestrial use of SpaceX’s BFR as a <a href="https://phys.org/news/2017-09-spacex-rocket-moon-mars-ny-to-shanghai.html">hyperfast transport between cities</a> on Earth. Musk has recently <a href="http://www.spacex.com/mars">titillated tech watchers</a> with plans to use commercial rocket flights to make any city on Earth less than an hour’s travel from any other. This is part of a larger plan to make the BFR profitable, and help cover the costs of planetary exploration. It’s a crazy idea – that just might work. But what about the environmental impact?</p>
<p>Even though the BFR will spew out tons of the greenhouse gas carbon dioxide, the impacts may not be much greater than those of current global air travel (depending on how many flights end up happening). And there’s always the dream of creating the fuel – methane and oxygen – using solar power and atmospheric gases. The BFR could even conceivably be carbon-neutral one day.</p>
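<p>The aviation comparison is easy to sanity-check with back-of-envelope arithmetic. The sketch below is illustrative only: the methane load, the flight schedule and the aviation total are all assumptions, not SpaceX or industry figures.</p>
<pre><code># Back-of-envelope CO2 comparison for hypothetical rocket commuting.
# Every number here is an assumption for illustration, not a SpaceX figure.
BFR_METHANE_TONNES = 240   # assumed methane propellant load per launch
CO2_PER_CH4 = 44 / 16      # tonnes CO2 per tonne CH4 burned (stoichiometry of CH4 + 2O2)
FLIGHTS_PER_DAY = 100      # assumed global rocket-commute schedule

rocket_mt_per_year = BFR_METHANE_TONNES * CO2_PER_CH4 * FLIGHTS_PER_DAY * 365 / 1e6
aviation_mt_per_year = 900  # global aviation, roughly 900 Mt CO2/yr circa 2017

print(f"rocket commuting: about {rocket_mt_per_year:.0f} Mt CO2/yr")
print(f"share of aviation: about {100 * rocket_mt_per_year / aviation_mt_per_year:.0f}%")
</code></pre>
<p>Under these assumptions the scheme adds a few percent of aviation’s annual emissions – not negligible, but consistent with the claim that it needn’t dwarf air travel.</p>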
<p>But at a time when humanity should be doing everything in our power to reduce carbon dioxide emissions, the optics aren’t great. And this could well lead to a damaging backlash before rocket-commuting even gets off the ground.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=375&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=375&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=375&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=472&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=472&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188264/original/file-20171001-21094-1v547vj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=472&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When the USSR launched Sputnik on Oct. 4, 1957, it also launched the space race.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Associated-Press-International-News-Czech-Repub-/5916c2079ae5da11af9f0014c2589dfb/4/0">AP Photo</a></span>
</figcaption>
</figure>
<h2>Inspiring – or infuriating?</h2>
<p>Sixty years ago, the Soviet Union launched Sputnik, the world’s first artificial satellite – and changed the world. It was the dawn of the space age, forcing nations to rethink their technical education programs and inspiring a generation of scientists and engineers.</p>
<p>We may well be standing at a similar technological tipping point as researchers develop the vision and technologies that could launch humanity into the solar system. But for this to be a new generation’s Sputnik moment, we’ll need to be smart in navigating the many social and political hurdles between where we are now and where we could be.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188267/original/file-20171001-19343-87cy78.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Imagine the possibilities….</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/spacex/17071818163">SpaceX</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>These nontechnical hurdles come down to whether society writ large grants SpaceX and Elon Musk the freedom to boldly go where no one has gone before. It’s tempting to think of planetary entrepreneurialism as simply getting the technology right and finding a way to pay for it. But if enough people feel SpaceX is threatening what they value (such as the environment – here or there), or disadvantaging them in some way (for example, by allowing rich people to move to another planet and abandoning the rest of us here), they’ll make life difficult for the company.</p>
<p>This is where Musk and SpaceX need to be as socially adept as they are technically talented. Discounting these hidden hurdles could spell disaster for Elon Musk’s Mars in the long run. Engaging with them up front could lead to the first people living and thriving on another planet in my lifetime.</p>
<p class="fine-print"><em><span>Andrew Maynard does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Musk’s audacious plan to blast people to Mars by 2024 glosses over some important social and political challenges that SpaceX will need to successfully navigate to get off the ground.Andrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/777592017-06-14T02:23:03Z2017-06-14T02:23:03ZHelping or hacking? Engineers and ethicists must work together on brain-computer interface technology<figure><img src="https://images.theconversation.com/files/173203/original/file-20170609-4841-73vkw2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A subject plays a computer game as part of a neural security experiment at the University of Washington.</span> <span class="attribution"><span class="source">Patrick Bennett</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>In the 1995 film <a href="http://www.imdb.com/title/tt0112462/">“Batman Forever</a>,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company <a href="http://www.nielsen.com/us/en/press-room/2011/nielsen-acquires-neurofocus.html">Nielsen had acquired Neurofocus</a> and had created a “consumer neuroscience” division that uses <a href="http://www.nielsen.com/us/en/solutions/capabilities/consumer-neuroscience.html">integrated conscious and unconscious data</a> to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.</p>
<p>Recent announcements <a href="https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs">by Elon Musk</a> <a href="https://techcrunch.com/2017/04/19/facebook-brain-interface/">and Facebook</a> about <a href="https://theconversation.com/melding-mind-and-machine-how-close-are-we-75589">brain-computer interface (BCI) technology</a> are just the latest headlines in an ongoing science-fiction-becomes-reality story.</p>
<p>BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm <a href="http://www.slate.com/blogs/future_tense/2012/12/21/jan_scheuermann_footage_of_paralyzed_woman_eating_chocolate_with_robotic.html">just by thinking about it</a>. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (<a href="http://www.csne-erc.org/">CSNE</a>) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now. </p>
<h2>Picking up on P300 signals</h2>
<p>All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.</p>
<p>Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “<a href="https://doi.org/10.4103/0972-6748.57865">event-related potentials</a>.” In the lab, they help us identify a reaction to a stimulus.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=524&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=524&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=524&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus.</span>
<span class="attribution"><span class="source">Tamara Bonaci</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>In particular, we capitalize on one of these specific signals, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2715154/">called the P300</a>. It’s a positive peak in electrical potential, recorded toward the back of the head, that occurs about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.</p>
<p>For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a brain pathway, still not fully understood, that aids in detecting important stimuli and focusing attention.</p>
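<p>For readers curious how a P300 is actually pulled out of noisy EEG, the sketch below shows the standard trick: averaging many time-locked epochs so the background noise cancels and the stimulus-locked component remains. It runs on synthetic data; a real pipeline (for example, in MNE-Python) would add filtering and artifact rejection.</p>
<pre><code># Sketch of ERP averaging on synthetic EEG epochs. A single trial is far
# too noisy to show a P300; averaging across trials reveals it.
import numpy as np

fs = 250                           # sampling rate, Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)   # epoch window around stimulus onset, seconds
n_trials = 60                      # number of stimulus repetitions (assumed)

rng = np.random.default_rng(0)
# Synthetic P300: ~5 microvolt Gaussian bump centered 300 ms post-stimulus.
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
# Each trial is the bump plus much larger background "EEG" noise.
epochs = p300 + 20e-6 * rng.standard_normal((n_trials, t.size))

erp = epochs.mean(axis=0)          # time-locked average across trials
peak_ms = 1000 * t[np.argmax(erp)]
print(f"averaged ERP peaks at about {peak_ms:.0f} ms after the stimulus")
</code></pre>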
<h2>Reading your mind using P300s</h2>
<p>P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, <a href="https://doi.org/10.1016/0013-4694(88)90149-6">one character at a time</a>.</p>
<p>It also can be used to determine what you know, in what’s called a “<a href="https://dx.doi.org/10.3109/00207458808985770">guilty knowledge test</a>.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then repeatedly shown images of both related and unrelated items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with an image of the item they took.</p>
<p>Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you <a href="https://doi.org/10.1111/j.1469-8986.2004.00158.x">know that you’re being probed and what the stimuli are</a>.</p>
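<p>In practice, calibration usually amounts to fitting a simple classifier on those labeled training epochs and then scoring new epochs with it. Here is a minimal sketch using linear discriminant analysis, a common choice for P300 systems; the data shapes and the injected “P300 bump” are invented purely for illustration.</p>
<pre><code># Minimal sketch of per-user P300 calibration: fit a classifier on labeled
# target/non-target epochs, then score unseen epochs. Synthetic data only;
# real systems extract features via filtering and downsampling first.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_train, n_feat = 200, 48              # calibration epochs x features (assumed)
X_train = rng.standard_normal((n_train, n_feat))
y_train = rng.integers(0, 2, n_train)  # 1 = oddball/target, 0 = standard
X_train[y_train == 1, 14:20] += 0.5    # crude stand-in for a P300 bump

clf = LinearDiscriminantAnalysis()     # a common choice for P300 BCIs
clf.fit(X_train, y_train)              # this fitting step is the "calibration"

X_new = rng.standard_normal((10, n_feat))
print("P(target) for new epochs:", clf.predict_proba(X_new)[:, 1].round(2))
</code></pre>
<p>Skipping this per-user step – classifying without any training data – is exactly what the “untrained device” research described next is probing.</p>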
<p>Techniques like these are still considered unreliable and unproven, and thus U.S. courts have <a href="https://doi.org/10.1176/ps.2007.58.4.460">resisted admitting P300 data as evidence</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealth.</span>
<span class="attribution"><span class="source">Mark Stone, University of Washington</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has <a href="https://digital.lib.washington.edu/researchworks/handle/1773/33808">collected data suggesting this is possible</a>. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.</p>
<p>But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of <a href="http://brl.ee.washington.edu/neural-engineering/bci-security/">continuing experiments at the University of Washington</a> <a href="https://perso.uclouvain.be/fstandae/PUBLIS/190.pdf">and elsewhere</a>. </p>
<p>It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.</p>
<p>What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming. </p>
<h2>With great power comes great responsibility</h2>
<p>The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security <a href="https://doi.org/10.1186/s40504-017-0050-1">be a human right</a>? How do we <a href="https://dx.doi.org/10.2139/ssrn.2427564">adequately protect and store all the neural data</a> being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being made with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering <a href="https://www.hhs.gov/hipaa/index.html">biomedical research or health care</a>. Should neural data be treated differently?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Neuroethicists from the UW Philosophy department discuss issues related to neural implants.</span>
<span class="attribution"><span class="source">Mark Stone, University of Washington</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – <a href="http://www.csne-erc.org/research/neuroethics">as we have done at the CSNE</a> – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their <a href="http://www.csne-erc.org/engage-enable/post/ethics-cornerstone-neural-engineering-research">ethical concerns about brain research</a>. </p>
<p>There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as <a href="https://www.genome.gov/27561246/privacy-in-genomics">genetics</a> and <a href="http://www.theneuroethicsblog.com/2011/08/ethical-dimenstions-of-neuromarketing.html">neuromarketing</a>. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.</p>
<h2>Working on ethics while tech’s in its infancy</h2>
<p>As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.</p>
<p>First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.</p>
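<p>A rough calculation shows why the low signal-to-noise ratio buys time: an eavesdropper needs many repeated stimulus presentations before an averaged response becomes trustworthy, because averaging N trials shrinks the noise only by the square root of N. The amplitudes below are assumptions for illustration.</p>
<pre><code># Why low per-trial SNR limits covert P300 extraction: the effective SNR
# of an N-trial average grows only like sqrt(N). Illustrative numbers.
import math

signal_uv = 5.0    # assumed P300 amplitude, microvolts
noise_uv = 30.0    # assumed single-trial noise with dry electrodes, microvolts

for n in (1, 10, 50, 200):
    snr = signal_uv / (noise_uv / math.sqrt(n))  # noise falls as 1/sqrt(n)
    print(f"{n:4d} trials -> SNR about {snr:.2f}")
</code></pre>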
<p>Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing improve, it will become easier to use devices like these continuously – and easier, too, to extract personal information from an unknowing individual. The safest advice, for now, would be to not use these devices at all.</p>
<p>The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.</p>
<p class="fine-print"><em><span>Eran Klein a member of the Center for Sensorimotor Neural Engineering (CSNE) at the University of Washington which receives funding from the National Science Foundation (NSF).</span></em></p><p class="fine-print"><em><span>Katherine Pratt works for the Electrical Engineering department at the University of Washington in Seattle, and is affiliated with the Center for Sensorimotor Neural Engineering (CSNE). Katherine Pratt receives funding from the National Science Foundation and Technology Policy Lab, and has also previously received support from Google. The CSNE partners with the companies listed at <a href="http://csne-erc.org/content/current-members">http://csne-erc.org/content/current-members</a></span></em></p>BCI devices that read minds and act on intentions can change lives for the better. But they could also be put to nefarious use in the not-too-distant future. Now’s the time to think about risks.Eran Klein, Adjunct Assistant Professor of Neurology at Oregon Health and Sciences University and Affiliate Assistant Professor of Philosophy, University of WashingtonKatherine Pratt, Ph.D. Student in Electrical Engineering, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/652152016-09-14T10:05:33Z2016-09-14T10:05:33ZConsidering ethics now before radically new brain technologies get away from us<figure><img src="https://images.theconversation.com/files/137669/original/image-20160913-4955-1hxmw14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Now's the time to think about what we're getting into with neurotechnologies.</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=133182821">Brain image via www.shutterstock.com.</a></span></figcaption></figure><p>Imagine infusing thousands of wireless devices into your brain, and using them to both monitor its activity and directly influence its actions. It sounds like the stuff of science fiction, and for the moment it still is – but possibly not for long.</p>
<p>Brain research is on a roll at the moment. And as it converges with advances in science and technology more broadly, it’s transforming what we are likely to be able to achieve in the near future. </p>
<p>Spurring the field on is the promise of more effective treatments for debilitating neurological and psychological disorders such as <a href="http://www.ninds.nih.gov/disorders/epilepsy/epilepsy.htm">epilepsy</a>, <a href="http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm">Parkinson’s disease</a> and <a href="https://www.nimh.nih.gov/health/topics/depression/index.shtml">depression</a>. But new brain technologies will increasingly have the potential to alter how someone thinks, feels, behaves and even perceives themselves and others around them – and not necessarily in ways that are within their control or with their consent.</p>
<p>This is where things begin to get ethically uncomfortable.</p>
<p>Because of concerns like these, the U.S. National Academies of Sciences, Engineering and Medicine (NAS) are <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">cohosting a meeting of experts this week</a> on responsible innovation in brain science.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/oO0zy30n_jQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Berkeley’s ‘neural dust’ sensors are one of the latest neurotech advances.</span></figcaption>
</figure>
<h2>Where are neurotechnologies now?</h2>
<p>Brain research is intimately entwined with advances in the “neurotechnologies” that not only help us study the brain’s inner workings, but also transform the ways we can interact with and influence it.</p>
<p>For example, researchers at the University of California, Berkeley recently <a href="http://news.berkeley.edu/2016/08/03/sprinkling-of-neural-dust-opens-door-to-electroceuticals/">published the results of the first in-animal trials of what they call “neural dust”</a> – implanted millimeter-sized sensors. They inserted the sensors into <a href="http://dx.doi.org/10.1016/j.neuron.2016.06.034">the nerves and muscles of rats</a>, showing that these miniature, wirelessly powered and connected sensors can monitor neural activity. The long-term aim, though, is to introduce thousands of neural dust particles <a href="http://arxiv.org/abs/1307.2196">into human brains</a>.</p>
<p>The UC Berkeley sensors are still relatively large – on a par with a coarse grain of sand – and just report on what’s happening around them. Yet advances in nanoscale fabrication are likely to enable their further miniaturization. (The researchers estimate they could be made <a href="https://arxiv.org/abs/1307.2196">thinner than a human hair</a>.) And in the future, combining them with technologies like <a href="http://www.scientificamerican.com/article/optogenetics-controlling/">optogenetics</a> – using light to stimulate genetically modified neurons – could enable wireless, localized brain interrogation and control.</p>
<p>Used in this way, future generations of neural dust could transform how chronic neurological disorders are managed. They could also enable hardwired brain-computer interfaces (the <a href="https://arxiv.org/abs/1307.2196">original motivation behind this research</a>), or even be used to enhance cognitive ability and modify behavior.</p>
<p>In 2013, President Obama launched the multi-year, multi-million dollar <a href="https://www.whitehouse.gov/BRAIN">U.S. BRAIN Initiative</a> (Brain Research through Advancing Innovative Neurotechnologies). The same year, the European Commission launched the <a href="https://www.humanbrainproject.eu/">Human Brain Project</a>, focusing on advancing brain research, cognitive neuroscience and brain-inspired computing. There are also active brain research initiatives in <a href="https://www.sfn.org/news-and-calendar/neuroscience-quarterly/spring-2016/china-qa">China</a>, <a href="http://rstb.royalsocietypublishing.org/content/370/1668/20140310">Japan</a>, <a href="http://english.yonhapnews.co.kr/business/2016/05/30/0504000000AEN20160530008200320.html">Korea</a>, <a href="http://www.labman.org/">Latin America</a>, <a href="http://israelbrain.org/">Israel</a>, <a href="http://bluebrain.epfl.ch/">Switzerland</a>, <a href="http://www.braincanada.ca/">Canada</a> and even <a href="http://www.ncbi.nlm.nih.gov/pubmed/21870466">Cuba</a>.</p>
<p>Together, these represent an emerging and globally coordinated effort not only to better understand how the brain works, but also to find new ways of controlling and enhancing it (in particular for disease treatment and prevention); to interface with it; and to build computers and other artificial systems that are inspired by it.</p>
<h2>Cutting-edge tech comes with ethical questions</h2>
<p>This week’s <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">NAS workshop</a> – organized by the <a href="https://www.innovationpolicyplatform.org/project-emerging-technologies-and-brain-oecd-bnct">Organisation for Economic Co-operation and Development</a> and supported by the National Science Foundation and my home institution of Arizona State University – isn’t the first gathering of experts to discuss the ethics of brain technologies. In fact, there’s already an active international community of experts addressing “<a href="https://en.wikipedia.org/wiki/Neuroethics">neuroethics</a>.”</p>
<p>Many of these scientific initiatives have a prominent ethics component. The U.S. BRAIN Initiative, for example, includes a <a href="https://www.braininitiative.nih.gov/about/newg.htm">Neuroethics Workgroup</a>, while the European Commission’s Human Brain Project is using an <a href="https://www.humanbrainproject.eu/2016-ethics">Ethics Map</a> to guide research and development. These and others are grappling with the formidable challenges of developing future neurotechnologies responsibly.</p>
<p>It’s against this backdrop that the NAS workshop sets out to better understand the social and ethical opportunities and challenges emerging from global brain research and neurotechnologies. A goal is to identify how to ensure these technologies are developed in ways that are responsive to social needs, desires and concerns. And it comes at a time when brain research is beginning to open up radical new possibilities that were far beyond our grasp just a few years ago.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=540&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=540&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=540&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=679&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=679&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=679&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Transcranial magnetic stimulation uses a powerful and rapidly changing electrical current to excite neural processes in the brain, similar to direct stimulation with electrodes.</span>
<span class="attribution"><span class="source">Eric Wassermann, M.D.</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>In 2010, for instance, researchers at MIT demonstrated that transcranial magnetic stimulation, or TMS – a noninvasive neurotechnology – <a href="http://news.mit.edu/2010/moral-control-0330">could temporarily alter someone’s moral judgment</a>. Another noninvasive technique called <a href="https://www.wired.com/2014/01/read-zapping-brain/">transcranial direct current stimulation</a> (tDCS) delivers low-level electrical currents to the brain via electrodes on the scalp. It’s being explored as a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3270156/">treatment for clinical conditions from depression to chronic pain</a>, even as it’s already being used in <a href="http://foc.us/">consumer products</a> and by <a href="http://www.wsj.com/articles/the-weird-world-of-brain-hacking-1447096569">do-it-yourselfers</a> to allegedly self-induce changes in mental state and ability.</p>
<p>Crude as current TMS and tDCS capabilities are, they are forcing people to think about the responsible development and use of technologies that could change behavior, personality and thinking ability at the flick of a switch. And the ethical questions they raise are far from straightforward.</p>
<p>For instance, should students be allowed to take exams while using tDCS? Should teachers be able to use tDCS in the classroom? Should TMS be used to prevent a soldier’s moral judgment from interfering with military operations?</p>
<p>These and similar questions grapple with what is already possible. Complex as they are, they pale beside the challenges that emerging neurotechnologies are likely to raise.</p>
<h2>Preparing now for what’s to come</h2>
<p>As research leads to an increasingly sophisticated and fine-grained understanding of how our brains function, related neurotechnologies are likely to become equally sophisticated. As they do, our ability to precisely control brain function, thinking, behavior and personality will extend far beyond what is currently possible.</p>
<p>To get a sense of the emerging ethical and social challenges such capabilities potentially raise, consider this speculative near-future scenario:</p>
<p>Imagine that in a few years’ time, the UC Berkeley neural dust has been successfully miniaturized and combined with optogenetics, allowing thousands of micrometer-sized devices – each capable of monitoring and influencing localized brain functions – to be seeded through someone’s brain. Now imagine this network of neural transceivers is wirelessly connected to an external computer, and from there, to the internet.</p>
<p>Such a network – a crude foreshadowing of science fiction author <a href="http://www.goodreads.com/author/show/5807106.Iain_M_Banks">Iain M. Banks</a>’ “neural lace” (a concept that has <a href="http://www.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638">already grabbed the attention of Elon Musk</a>) – would revolutionize the detection and treatment of neurological conditions, potentially improving quality of life for millions of people. It would enable external devices to be controlled through thought, effectively integrating networked brains into the Internet of Things. It could help some people overcome physical limitations. And it would potentially provide unprecedented levels of cognitive enhancement, by allowing people to interface directly with cloud-based artificial intelligence and other online systems.</p>
<p>Think Apple’s Siri or Amazon’s Echo hardwired into your brain, and you begin to get the idea.</p>
<p>Yet this neurotech – which is almost within reach of current technological capabilities – would not be risk-free. These risks could be social – perhaps a growing socioeconomic divide between those who are neuro-enhanced and those who are not. Or they could be related to privacy and autonomy – maybe the ability of employers and law enforcement to monitor, and even alter, thoughts and feelings. The innovation might threaten personal well-being and societal cohesion through (hypothetical) cyber substance abuse, where direct-to-brain code replaces psychoactive substances. It could make users highly vulnerable to neurological cyberattacks.</p>
<p>Of course, predicting and responding to possible future risks is fraught with difficulties, and depends as much on who considers what a risk (and to whom) as it does on the capabilities of emerging technologies to do harm. Yet it’s hard to avoid the likely disruptive potential of near-future neurotechnologies. Thus the urgent need to address – as a society – what we want the future of brain technologies to look like.</p>
<p>Moving forward, the ethical and responsible development of emerging brain technologies will require new thinking, along with considerable investment, in what might go wrong and how to avoid it. Here, we can learn from the thinking on responsible and ethical innovation that has emerged around <a href="https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA">recombinant DNA</a>, <a href="https://cns.asu.edu/viri">nanotechnology</a>, <a href="https://experimentearth.org/">geoengineering</a> and other cutting-edge areas of science and technology.</p>
<p>To develop future brain technologies both successfully and responsibly, we need to do so in ways that avoid potential pitfalls while not stifling innovation. We need approaches that ensure ordinary people can easily find out how these technologies might affect their lives – and they must have a say in how they’re used.</p>
<p>All this won’t necessarily be easy – responsible innovation rarely is. But through initiatives like this week’s NAS workshop and others, we have the opportunity to develop brain technologies that are profoundly beneficial, without stumbling into an ethical minefield.</p>
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Andrew Maynard is a member of the ASU School for the Future of Innovation in Society, which is co-organizing the September 15-16 workshop on responsible innovation in brain science. </span></em></p>How will neurotech evolve? An NAS workshop this week focuses on social and ethical opportunities and challenges we face both now and down the road.Andrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/628842016-07-22T22:28:40Z2016-07-22T22:28:40ZIt’ll take more than tech for Elon Musk to pull off audacious new Tesla master plan<p>Elon Musk – CEO of Tesla Motors – has just revealed the second part of his <a href="https://www.tesla.com/blog/master-plan-part-deux">master plan for the company</a>. And it’s a doozy. Not content with producing sleek electric cars (which to be fair, was only ever a stepping stone to greater things), Musk wants to fundamentally change how we live our lives. But the road to Musk’s techno-utopia may be rocky.</p>
<p>In 2006, Musk announced his “<a href="https://www.tesla.com/blog/secret-tesla-motors-master-plan-just-between-you-and-me">Secret Tesla Motors Master Plan</a>.” Steps one to three were simple and elegant:</p>
<ul>
<li>Build [a] sports car</li>
<li>Use that money to build an affordable car</li>
<li>Use <em>that</em> money to build an even more affordable car.</li>
</ul>
<p>But cutting through these was a fourth step that had a much stronger social goal in sight: to develop and “provide zero emission electric power generation options.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131627/original/image-20160722-26828-wymqp6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Step One: complete.</span>
<span class="attribution"><a class="source" href="https://www.tesla.com/about">Tesla</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>This desire to change the world for the better is apparent in “part deux” of the master plan. Steps one to three of the new plan are superficially technological goals:</p>
<ul>
<li>Create stunning solar roofs with seamlessly integrated battery storage</li>
<li>Expand the electric vehicle product line to address all major segments</li>
<li>Develop a self-driving capability that is 10X safer than manual via massive fleet learning.</li>
</ul>
<p>Yet underpinning them is a revolutionary vision for transforming society. Elon Musk doesn’t just want to fast-track the transition to renewable energy and self-driving cars – he wants to rewrite the rulebook on how we build a futuristic sustainable society.</p>
<h2>Shifting the culture with new technologies</h2>
<p>This comes through loud and clear in his fourth step in the new master plan. Once there are enough privately owned fully autonomous Teslas on the road, Musk wants to co-opt them into the “Tesla shared fleet.” The concept is as simple as it is audacious: Instead of your Tesla sitting idle in the garage or parking lot when not in use, it would become part of a network of fully autonomous ride-share vehicles – providing driverless lifts for customers on demand, and income for individual vehicle owners.</p>
<p>Musk’s concept is a natural fusion of several technology trends: autonomous vehicles, the Internet of Things, artificial intelligence and the sharing economy, to name just a few. Technologically, it makes perfect sense – especially when you throw in innovative solar and battery technologies to keep the fleet mobile and energy-efficient.</p>
<p>Yet to succeed, it would require a seismic shift in modern culture – not only in how we live our lives, but also how we think about possessions and ownership. Importantly, this brave new world would have to navigate our existing values and cultural norms.</p>
<p>And here’s the rub: While Musk and his teams have the technical know-how to implement the master plan, it’s not clear yet whether they have the social and political savvy to make it work.</p>
<h2>Engineering the social and political landscape</h2>
<p>We recently saw a hint of what’s to come when a <a href="http://www.freep.com/story/money/cars/2016/07/01/tesla-autopilot-death-highlights-autonomous-risks/86591130/">Tesla driver was killed</a> while his car was running on autopilot. Through a combination of factors, neither the driver nor the car’s autonomous systems managed to detect and avoid a tractor-trailer across the road, leading to a fatal crash. Despite claims that the <a href="http://www.businessinsider.com/elon-musk-says-teslas-autopilot-reduces-car-accidents-by-50-2016-4">autopilot feature reduces the chances of crashes</a>, the incident has people thinking about the <a href="https://theconversation.com/should-teslas-autopilot-cars-be-allowed-on-public-roads-following-accidents-62495">socially acceptable use</a> of cars that remove responsibility for life-and-death decisions from their occupants.</p>
<p>Just going by the numbers, Tesla’s autopilot technology makes sense. <a href="https://www.tesla.com/blog/misfortune">According to Musk</a>, Tesla occupants using the autopilot feature are statistically safer than those not using it. And the more the feature is used, the better it will get at ensuring the car’s occupants are safe – thanks to the machine learning that is constantly <a href="http://fortune.com/2015/10/16/how-tesla-autopilot-learns/">enhancing the fleet’s auto-capabilities</a>.</p>
<p>But when it comes to technology innovation, numeric logic is often trumped by what we intuitively think and feel is important. As we’ve seen in discussions around autonomous vehicles, how many people are likely to be killed or injured is often less important than <a href="https://theconversation.com/driverless-cars-should-sacrifice-their-passengers-for-the-greater-good-just-not-when-im-the-passenger-61363"><em>who</em> might be killed</a> (whether driver, passengers or pedestrians), how, and who (or what) makes the decisions. Here, even the argument that autonomous vehicles save lives (and are safer than human-driven vehicles) faces an uphill struggle.</p>
<p>That’s not to say that the hurdles Tesla and Musk face in implementing their master plan are insurmountable – they’re not. But as Musk begins to implement part two of the plan, he’s going to need to become increasingly adept at navigating an ever more complex social and political landscape.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=367&fit=crop&dpr=1 600w, https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=367&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=367&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=462&fit=crop&dpr=1 754w, https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=462&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/131656/original/image-20160722-26841-wec0yc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=462&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Developing the technology may be the easy part.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=109280882">Road image via www.shutterstock.com.</a></span>
</figcaption>
</figure>
<h2>Innovating through the wicked problems</h2>
<p>Musk appears to be aware of this. In 2015, through the Future of Life Institute (FLI), he backed a <a href="http://futureoflife.org/2015/10/12/11m-ai-safety-research-program-launched/">US$11 million research program</a> to support the robust and beneficial development of artificial intelligence – an important technology for carrying out his master plan. Included in this initiative is the <a href="https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/">Strategic Artificial Intelligence Research Center</a> – a collaboration between Oxford and Cambridge universities that aims to help develop policies to minimize the risks and maximize the benefits from artificial intelligence.</p>
<p>The tea leaves suggest Musk is thinking broadly about what it takes to develop societally successful technologies. Yet to succeed in his master plan, I suspect he’ll need to think more broadly still – and fast.</p>
<p>Part of the issue is that the <a href="https://theconversation.com/responsible-development-of-new-technologies-critical-in-complex-connected-world-38195">relationship between technology and society</a> is <a href="https://www.weforum.org/agenda/2016/01/mastering-the-social-side-of-the-fourth-industrial-revolution-an-essential-reading-list/">highly complex and constantly shifting</a>. Successfully developing transformative technologies while benefiting society as a whole leads to what some are fond of calling “wicked” problems – problems that are so slippery they change and shift in response to attempts to solve them. (If you want an example, just look at the introduction and use of genetically modified foods.) Changing society through new solar technologies, autonomous vehicles and driverless car-sharing will take a lot more than smart technologies, safety research and policy recommendations.</p>
<p>To navigate what is a wickedly complex social and political landscape, Musk is going to need to make friends with people in a whole bunch of new areas, from <a href="http://rdcu.be/jpWS">responsible innovation</a> and the <a href="http://issues.org/31-4/coordinating-technology-governance/">governance of emerging technologies</a>, to <a href="http://issues.org/27-1/p_sclove/">technology assessment</a> and <a href="http://rdcu.be/jpWT">risk innovation</a> – an emerging approach to thinking and acting differently on risk. Here, I’m admittedly a little biased, as this is what we do in Arizona State University’s <a href="https://sfis.asu.edu/">School for the Future of Innovation in Society</a>. But we’re just a part of a growing global community of experts who are developing the know-how to ensure that emerging technological capabilities are developed responsibly and successfully.</p>
<p>To be sure, Elon Musk’s master plan for Tesla Motors is nothing if not inspired. It’s visionary, elegant, likely to improve lives and technologically within reach. Yet without coming to grips with the increasingly complex social and political challenges it faces, the plan runs the risk of not getting much further than the metaphorical paper it’s written on.</p>
<p>And that would be a shame. Because – implemented responsibly – Musk’s vision could be a game changer for how we go about building a more sustainable world.</p>
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Andrew Maynard is a professor in the Arizona State University School for the Future of Innovation in Society. He is a member of the World Economic Forum Global Agenda "metacouncil" on emerging technologies. </span></em></p>The technological goals are lofty. But fitting the new tech into the social and political landscape might pose the bigger challenge.Andrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/613492016-06-23T10:03:56Z2016-06-23T10:03:56ZHow risky are the World Economic Forum’s top 10 emerging technologies for 2016?<figure><img src="https://images.theconversation.com/files/127834/original/image-20160622-7165-egq4no.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Welcome to the future....</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=335493716">Robot via www.shutterstock.com.</a></span></figcaption></figure><p>Take an advanced technology. Add a twist of fantasy. Stir well, and watch the action unfold.</p>
<p>It’s the perfect recipe for a Hollywood tech-disaster blockbuster. And clichéd as it is, it’s the scenario that we too often imagine for emerging technologies. Think superintelligent machines, lab-bred humans, the ability to redesign whole species – you get the picture.</p>
<p>The reality, of course, is that the real world is usually far more mundane: less “zombie apocalypse” and more “<a href="https://www.washingtonpost.com/news/the-intersect/wp/2016/03/24/the-internet-turned-tay-microsofts-fun-millennial-ai-bot-into-a-genocidal-maniac/">teens troll supercomputer; teach it bad habits</a>.”</p>
<p>Looking through this year’s crop of <a href="https://www.weforum.org/agenda/2016/06/top-10-emerging-technologies-2016">Top Ten Emerging Technologies</a> from the World Economic Forum (<a href="https://www.weforum.org">WEF</a>), I’d say this is probably a good thing.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=561&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=561&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=561&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=704&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=704&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127773/original/image-20160622-7165-18f8nlv.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=704&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">World Economic Forum</span></span>
</figcaption>
</figure>
<p>Since 2012, I’ve been part of a group of WEF advisers who help compile an <a href="http://wef.ch/emergingtech16">annual list of emerging technologies</a> that are poised to transform our lives. This year’s list includes <a href="https://www.weforum.org/agenda/2016/06/autonomous-vehicles">autonomous vehicles</a>, <a href="https://www.weforum.org/agenda/2016/06/the-blockchain">blockchain</a> (the technology behind Bitcoin), <a href="https://www.weforum.org/agenda/2016/06/next-generation-batteries">next-generation batteries</a> and a number of other technologies that are beginning to make their mark.</p>
<p>The list is aimed at raising awareness around potentially transformative technologies so that investors, businesses, regulators and others know what’s coming down the pike. It’s also an opportunity for us to think through what might go wrong as the technologies mature.</p>
<p>Admittedly, some of these technologies would stretch the imagination of the most creative of apocalyptic screenwriters – it’ll be a while, I suspect, before “<a href="https://www.weforum.org/agenda/2016/06/two-dimensional-materials">Graphene</a> Apocalypse” or “Day of the <a href="https://www.weforum.org/agenda/2016/06/perovskite-solar-cells">Perovskite Cell</a>” hit the silver screen. But others show considerable potential for a summer scare-flick, including “brain-controlling” <a href="https://www.weforum.org/agenda/2016/06/optogenetics">optogenetics</a> and the mysterious sounding “<a href="https://www.weforum.org/agenda/2016/06/nanosensors-and-the-internet-of-nano-things">Internet of Nano Things</a>.” </p>
<p>Putting Hollywood fantasies aside, though, it’s hard to predict the plausible downsides of emerging technologies. Yet this is exactly what is needed if we’re to ensure they’re developed responsibly in the long run.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=329&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=329&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=329&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=414&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=414&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127791/original/image-20160622-7165-17ddz7z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=414&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sometimes we need to head back to the lab when dealing with new technology challenges…like self-driving cars mixing it up with the rest of us.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/austintx/21716708788">Alan</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<h2>Tech problems, tech solutions</h2>
<p>It’s tempting to ask what concrete harm technologies like those in this year’s top ten could cause, then simply figure out how to “fix” the problems. For instance, how do we ensure that “logical” self-driving cars safely share the road with less “logical” humans? Or how do we prevent bacteria that are genetically programmed to produce commercial chemicals from polluting the environment? These are risks that lend themselves to technological solutions.</p>
<p>But focusing on such questions can mask much more subtle dangers inherent in emerging technologies, threats that aren’t as amenable to technological fixes, and that we all too easily overlook. </p>
<p>For example, being infused with internet-connected nano-sensors that reveal your most intimate biological details to the world could present social and psychological risks that can’t be solved by technology alone.</p>
<p>Similar concerns arise around “<a href="https://www.weforum.org/agenda/2016/06/the-open-ai-ecosystem">open artificial intelligence (AI) ecosystems</a>” – the next step up from systems like Amazon’s Echo, Apple’s Siri and Microsoft’s Cortana. By combining “listening” devices, cloud computing and the Internet of Things, machines are increasingly pairing the capacity to understand normal conversation with the ability to take action on what they hear. This is a truly transformative technology platform. But what happens when these AI ecosystems begin to listen in on private conversations and share them with others? Or independently decide what’s best for you? These possibilities raise ethical and moral concerns that aren’t easily addressed solely by tech solutions.</p>
<h2>Expanding our conception of what we value</h2>
<p>One way to tease out the subtler possible impacts of emerging technologies is to think of risk as a threat to something of value – an idea that’s embedded in the somewhat new concept of <a href="https://theconversation.com/thinking-innovatively-about-the-risks-of-tech-innovation-52934">Risk Innovation</a>. This “value” depends on what’s important to different individuals, communities and organizations.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127802/original/image-20160622-7175-1hk40vu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">We’re used to thinking about risks like human health hazards when it comes to new tech such as perovskite solar cells.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/imec_int/22098135423">imec_ int</a></span>
</figcaption>
</figure>
<p>Health, wealth and a sustainable environment are clearly important “things of value” in this context, as are livelihood, food, water and shelter. Threats to any of these align with more conventional approaches to risk – a health risk, for instance, can be understood as something that threatens to make you sick, and an environmental risk as something that threatens the integrity of the environment.</p>
<p>But we can also extend the idea of a threat to something we value to less conventional types of risk: threats to self-worth, for instance, or culture, sense of security, equity, even deeply held beliefs. </p>
<p>These touch on things that define us as individuals and communities, and get to the heart of what gives us a sense of purpose and belonging. In this way, relevant threats might include inequity or an eroded sense of self-worth from new tech taking away your job. Or anxiety over who knows what about you, and how they might use it. Or fear of becoming socially marginalized by the use of new technologies. Or even dread over sacrosanct beliefs – such as the sanctity of life, or the right to free choice – being challenged by emerging technological capabilities.</p>
<p>Threats like these aren’t easy to capture. Yet they have a profound impact on people – and as a consequence, on how new technologies are developed and used. Thinking more broadly about risk as a threat to value is especially helpful to understanding the possible undesired consequences of tech innovation, and how they might be avoided.</p>
<h2>Risks of missing out on new technologies</h2>
<p>This approach to risk also opens the door to considering the potential risks of <em>not</em> developing a technology. Beyond existing value, future value is also important to most people and organizations.</p>
<p>For instance, autonomous vehicles could eventually prevent <a href="http://www.theatlantic.com/technology/archive/2015/09/self-driving-cars-could-save-300000-lives-per-decade-in-america/407956/">tens of thousands of road deaths</a>; optogenetics – using genetic engineering and light to manipulate brain cell activity – <a href="http://www.scientificamerican.com/article/revolutionary-neuroscience-technique-slated-for-human-clinical-trials/">could help cure or manage</a> debilitating neurological diseases; and materials like graphene could ensure more people than ever have access to <a href="https://www.weforum.org/agenda/2015/07/can-graphene-make-the-worlds-water-clean/">cheap clean water</a>. Not developing these technologies potentially threatens things that many people hold to be extremely valuable.</p>
<p>Of course, on the flip side, these technologies may also threaten what is important to some. Self-driving cars might undermine human responsibility, not to mention the enjoyment of driving. Optogenetics raises the possibility of involuntary neurological control. And graphene <a href="http://spectrum.ieee.org/nanoclast/at-work/test-and-measurement/should-we-worry-about-graphene-oxide-in-our-water">might be harmful to some ecosystems</a> if released into the environment in sufficient quantities.</p>
<p>By considering how emerging technologies potentially interact with what we consider to be important, it becomes easier to weigh the possible downsides of developing them – or at least developing them without due consideration – against those of either impeding their development, or not developing them at all.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127804/original/image-20160622-7196-4d64gz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Broadening the values we consider to be at risk from new technologies opens up a new way of thinking about responsible development.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/petruzzophoto/7337301522">William Petruzzo</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>The greatest risk of all</h2>
<p>What emerges when risk is approached as a threat to value is a much richer way of thinking about how emerging technologies might affect people, communities and organizations, and how they can be developed responsibly. It’s an approach that forces us to realize that the consequences of developing new technologies are complex, and touch people in different ways – not all of them for the better. It’s not necessarily a comfortable reconceptualization – but looking at risk from this new angle does pave the way for technologies that benefit many people and disadvantage few, rather than the other way round.</p>
<p>In reality, unlike the simplicity of Hollywood blockbusters, the risks associated with emerging technologies are rarely clear-cut, and almost never straightforward. Yet they nevertheless exist. Every one of this year’s <a href="https://www.weforum.org/agenda/2016/06/top-10-emerging-technologies-2016">World Economic Forum top 10 emerging technologies</a> has the potential to threaten something of value to some person or organization – whether undermining an established technology or business model, jeopardizing jobs, or influencing health and well-being. </p>
<p>These dangers are context-specific, often intertwined with each other, sometimes conflicting, and often balanced by the risks of not developing the technology. Yet understanding and addressing them is essential to realizing the long-term benefits that these technologies offer.</p>
<p>And here, perhaps, is the greatest risk – that either in our enthusiasm for developing these technologies, or our Hollywood-inspired fears of potential consequences, we lose sight of the value of developing new technologies that make our world a better place, not just a different one.</p>
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Andrew Maynard co-chairs the World Economic Forum Global Agenda Council on Nanotechnology, and is a member of the WEF Meta-Council on Emerging Technologies.</span></em></p>A list of 10 new technologies poised to transform our lives provides a chance to think about any related risks sooner than later. Reconceptualizing “value” changes what responsible development means.Andrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.