<h1>‘Responsibility washing’ is as bad for health innovation as greenwashing is for sustainability. Here’s how to stop it.</h1><figure><img src="https://images.theconversation.com/files/536677/original/file-20230710-14032-s67dyv.jpg?ixlib=rb-1.1.0&rect=17%2C26%2C5973%2C2964&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Giving innovators the right tools and guidance can set a new path for responsible health innovation, covering products from concept to disposal.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>While Apple CEO Tim Cook believes “<a href="https://www.popularmechanics.com/technology/a40823587/guest-editor-tim-cook/">the future is responsible innovation</a>,” management scholars <a href="https://doi.org/10.1142/8903">warned of a trend of “responsibility washing” nearly a decade ago</a>. Time, unfortunately, has proven them right. Over the past few months, generative AI tools such as ChatGPT have become the latest technology to raise concerns <a href="https://theconversation.com/the-ai-arms-race-highlights-the-urgent-need-for-responsible-innovation-200218">of responsibility washing</a>. </p>
<p>Much like <a href="https://www.canada.ca/en/competition-bureau/news/2022/01/be-on-the-lookout-for-greenwashing.html">greenwashing tarnished sustainability efforts</a> before the establishment of standards and norms (such as the <a href="https://www.iso.org/iso-14001-environmental-management.html">ISO 14001 environmental management standards</a>), responsible innovation (RI) is threatened by responsibility washing. </p>
<p>Just as greenwashing creates a misleading guise of environmental responsibility for a product or organization, responsibility washing creates the impression of RI without making discernible efforts to address important social responsibility issues like health equity, affordability and sustainability.</p>
<p>RI aims to <a href="https://econpapers.repec.org/bookchap/wsiwsbook/8903.htm">reduce the negative impacts of innovations on users and society</a> by <a href="https://doi.org/10.1016/j.respol.2013.05.008">transforming how innovations are developed</a>. But the lack of a standard definition of RI, practical tools or clear assessment criteria and methods can lead to responsibility washing an innovation, whether intentionally or not.</p>
<p>New health technologies raise complex economic, social and environmental risks and harms in addition to clinical safety and efficacy concerns. That’s why our research team of RI experts tackles responsibility washing in this sector.</p>
<h2>How does responsible innovation apply to the health sector?</h2>
<p>Readers may wonder: Why does the health sector need RI? Health innovations are highly regulated to limit risks and harms. Their purpose is to save lives and make people feel better. Good technologies are developed with clinicians and patients to better meet their needs. Aren’t they already responsible?</p>
<p>In 2015, our team of Canadian and Brazilian researchers set out to better understand how RI applies to the health sector in both mature and emerging economies. This included: </p>
<ul>
<li>conducting a comprehensive review of research; </li>
<li>interviewing over 85 experts in fields such as entrepreneurship, engineering, industrial design and health technology assessment; </li>
<li>conducting a four-year case study with small and medium-sized enterprises; and </li>
<li>implementing a collaborative process with experts to derive practical guidance. </li>
</ul>
<p>This research led to the Responsible Innovation in Health (RIH) framework, which aims to foster “<a href="https://doi.org/10.1007/978-981-19-3151-2">high quality and safe health innovations that also: strengthen health system equity, provide more value to users, use fewer resources, are good for the environment, and are economically viable</a>.” </p>
<p>RIH brings together five areas of value, each defined by a specific goal and responsibility attributes (or elements) that extend beyond clinical safety and efficacy standards: population health value, health system value, economic value, organizational value and environmental value.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Text chart defining five values: Population health value, Health system value, Economic value, Organizational value and Environmental value." src="https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536137/original/file-20230706-15-mrj726.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Responsible innovation includes five values, each defined by a specific goal and responsibility attributes that extend beyond clinical safety and efficacy standards.</span>
<span class="attribution"><span class="source">(In Fieri)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In addition to clearly defining RI for the health sector, our team developed a tool that informs design decisions, and an assessment tool that measures the degree of responsibility of a health innovation. </p>
<p>The <a href="https://doi.org/10.1007/978-981-19-3151-2_3">RIH Design Brief</a> is a practical tool for innovators that explains how to integrate the nine RIH responsibility attributes throughout the innovation’s lifecycle: ideation, design, development, commercialization and end-of-life disposal. </p>
<p>The <a href="https://doi.org/10.1007/978-981-19-3151-2_8">RIH Assessment Tool</a> then measures the extent to which each responsibility attribute is fulfilled. This tool was <a href="http://dx.doi.org/10.1016/j.hlpt.2018.10.007">validated by international experts</a> and was confirmed <a href="https://doi.org/10.34172%2Fijhpm.2020.34">reliable</a> through an <a href="https://www.statology.org/inter-rater-reliability/">inter-rater agreement assessment</a>. </p>
<p>Both the design brief and the assessment tool rely on four-level rating scales for each attribute, where “A” represents a high level of responsibility and “D” indicates no particular signs of responsibility. </p>
<p>For example, let’s take a closer look at the rating scales of inclusiveness and of eco-responsibility — two issues rarely addressed by health technology assessments. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=381&fit=crop&dpr=1 600w, https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=381&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=381&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=478&fit=crop&dpr=1 754w, https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=478&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/535824/original/file-20230705-19007-uvqkob.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=478&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The assessment tool uses a four-level rating scale, where A represents a high level of responsibility and D indicates that there are no particular signs of responsibility.</span>
<span class="attribution"><span class="source">(In Fieri)</span></span>
</figcaption>
</figure>
<p>Inclusiveness is measured by assessing whether the innovation team formally consulted with a diverse group of people who may be affected by the technology, and explained how that group’s input was integrated into the design. Formal methods include both consultation methods (such as surveys) and engagement methods (such as round tables). </p>
<p>Eco-responsibility is measured by examining the number of key lifecycle stages where the innovation team applied eco-responsible practices, <a href="https://doi.org/10.34172%2Fijhpm.2020.34">including raw material sourcing, manufacturing, distribution, use and disposal</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=325&fit=crop&dpr=1 600w, https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=325&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=325&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=408&fit=crop&dpr=1 754w, https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=408&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/535825/original/file-20230705-19-o89vfm.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=408&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Eco-responsibility is measured by looking at the number of lifecycle stages where the innovation team applied eco-responsible practices, such as raw material sourcing, manufacturing, distribution, use and disposal.</span>
<span class="attribution"><span class="source">(In Fieri)</span></span>
</figcaption>
</figure>
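<p>For readers who want to see the counting logic spelled out, here is a minimal, hypothetical Python sketch of how an eco-responsibility rating could be derived from the number of lifecycle stages covered. The stage names come from the article, but the A-to-D cut-offs below are illustrative assumptions, not the published RIH scale.</p>

```python
# Hypothetical illustration of an A-D eco-responsibility rating based on how
# many of the five lifecycle stages have documented eco-responsible practices.
# The cut-offs below are assumptions for illustration, not the RIH tool's own.
LIFECYCLE_STAGES = {"raw material sourcing", "manufacturing",
                    "distribution", "use", "disposal"}

def eco_responsibility_rating(stages_with_practices):
    # Count only recognised lifecycle stages, ignoring anything else.
    covered = len(LIFECYCLE_STAGES & set(stages_with_practices))
    if covered >= 4:
        return "A"  # high level of responsibility
    if covered == 3:
        return "B"
    if covered >= 1:
        return "C"
    return "D"      # no particular signs of responsibility
```

<p>The point of the sketch is simply that the rating is evidence-based and countable: a team claiming eco-responsibility without documented practices at any stage lands at “D”, which is what distinguishes measurement from responsibility washing.</p>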
<h2>Building a responsible health innovation path</h2>
<p>By integrating RIH attributes during the concept, design and development stages of an innovation, innovators can ensure that the key economic, social and environmental responsibility issues raised by their health innovation are identified and tackled in a measurable way. </p>
<p>This is a complex process, so the RIH Design Brief offers practical guidance with an original design-thinking tool called the <a href="https://doi.org/10.1007/978-981-19-3151-2_2">Responsible Design Compass</a>. A multidisciplinary <a href="https://link.springer.com/chapter/10.1007/978-981-19-3151-2_6">toolbox</a> also indicates where innovators can work towards RIH goals using existing tools, such as <a href="https://www.fda.gov/regulatory-information/search-fda-guidance-documents/applying-human-factors-and-usability-engineering-medical-devices">FDA’s human factors engineering</a>, <a href="https://www.designkit.org/methods/photojournal.html">IDEO’s photojournal</a> or <a href="https://www.bcorporation.net/en-us/programs-and-tools/b-impact-assessment/">B Corp’s impact assessment</a>.</p>
<p>Once the innovation is completed and ready for use, policymakers, health-care managers, investors, technology transfer offices, philanthropic foundations and patient groups can apply the RIH Assessment Tool to inform investment, purchasing or implementation decisions.</p>
<p>The goal of RIH is to prevent responsibility washing by setting a new path for responsible health innovation. It provides the tools and practical guidance for innovators to be accountable to users and society from ideation through end-of-life disposal.</p>
<p class="fine-print"><em><span>Pascale Lehoux receives funding from the Canadian Institutes of Health Research (CIHR). Her research group is supported by the Fonds de la recherche du Québec-Santé (FRQ-S).</span></em></p><p class="fine-print"><em><span>Hudson Silva and Lysanne Rivard do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p><p class="fine-print"><em><span>Lysanne Rivard, Senior Research Advisor, Center for Public Health Research, Université de Montréal; Hudson Silva, Senior Research Analyst, Fieri Research Program on Responsible Innovation in Health, Université de Montréal; Pascale Lehoux, Professor of Health Management, Evaluation and Policy, Université de Montréal. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Australia has a National Quantum Strategy. What does that mean?</h1><figure><img src="https://images.theconversation.com/files/524890/original/file-20230508-197326-ujrjbd.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4080%2C2021&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/ERdTJQTtsbE">Dynamic Wang / Unsplash</a></span></figcaption></figure><p>Imagine a world where computers can solve complex problems in seconds, making our current devices seem like mere typewriters. These supercomputers would revolutionise industries, create new medicines, and even help combat climate change. </p>
<p>Imagine as well that we could observe the workings of our own bodies in unprecedented detail, and communicate online without fear of hacking. This may be starting to <a href="https://thequantuminsider.com/2021/07/09/quantum-technology-in-science-fiction-popular-culture/">sound like a sci-fi novel</a>, but quantum technologies have the potential to make it all real.</p>
<p>Australia has just unveiled its first <a href="https://www.industry.gov.au/publications/national-quantum-strategy">National Quantum Strategy</a>. The strategy aims to make Australia “a leader of the global quantum industry” by 2030, by encouraging research, applications and commercialisation. </p>
<p>So what does that actually mean?</p>
<h2>What are quantum technologies?</h2>
<p>Quantum technologies build on the science of quantum mechanics, which studies the behaviour of subatomic particles at a microscopic scale. </p>
<p>At this level, particles behave strangely: they can exist in multiple states simultaneously (called superposition), and be “entangled” with each other. When particles are entangled, their properties are linked together regardless of the distance between them. </p>
<p>Quantum technologies make use of these counterintuitive properties to achieve things that might otherwise be impossible. Three main areas of quantum technology are gaining the most attention: quantum sensing, quantum communications, and quantum computing.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-quantum-computation-and-communication-technology-7892">Explainer: quantum computation and communication technology </a>
</strong>
</em>
</p>
<hr>
<p>Quantum sensing can detect tiny changes in the environment, measuring things like gravity, magnetic fields and temperature with incredible accuracy. This technology could have a huge impact on industries like healthcare, mining and navigation. </p>
<p>For instance, quantum sensors may be able to help us <a href="https://phys.org/news/2020-11-quantum-nanodiamonds-disease-earlier.html">detect early signs of diseases in our bodies</a> and <a href="https://www.australianmining.com.au/breakthrough-technologies-for-mineral-exploration-fetch-billions/">locate valuable minerals hidden deep underground</a>.</p>
<p>Unlike traditional computers, which store and process information using bits (zeroes and ones), quantum computers use “qubits”, which can exist as zeroes, ones, or combinations of both at once. </p>
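<p>For readers who like to see the idea in code, a qubit’s “combination of both at once” can be modelled as two complex amplitudes whose squared magnitudes give the probabilities of measuring a 0 or a 1. This is a hand-rolled sketch using only Python’s standard library, not any real quantum SDK:</p>

```python
import math

# A single qubit state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
def measurement_probabilities(alpha, beta):
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    if not math.isclose(p0 + p1, 1.0):
        raise ValueError("state must be normalised")
    return p0, p1

# An equal superposition: until measured, 0 and 1 are equally likely.
p0, p1 = measurement_probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
```

<p>A classical bit is the special case where one amplitude is 1 and the other is 0; everything in between is what gives quantum computers their extra room to work in.</p>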
<figure class="align-center ">
<img alt="A photo of the brass coils and circuitry of a quantum computer." src="https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/524999/original/file-20230508-195023-bjjc4v.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Quantum computers may be able to crack problems that are currently impossible to solve.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Fully functioning quantum computers don’t exist yet – but scientists believe they will be able to perform certain kinds of calculations at lightning speed, solving <a href="https://www.abc.net.au/news/science/2021-08-14/australian-research-puts-larger-quantum-computers-within-reach/100371544">some problems</a> that would take today’s computers millions of years to crack. This would have <a href="https://hbr.org/2021/07/quantum-computing-is-coming-what-can-it-do">huge implications</a> for fields including cryptography, AI, drug discovery, and climate modelling.</p>
<p>Researchers are also working on <a href="https://www.newscientist.com/article/2253448-secure-quantum-communications-network-is-the-largest-of-its-kind/">super-secure quantum communication networks</a> that are almost impossible to hack or eavesdrop on. On networks like these, attempts to intercept messages would be <a href="https://www.bcg.com/publications/2023/are-you-ready-for-quantum-communications">instantly detectable</a> to the sender and the receiver.</p>
<h2>The quantum race</h2>
<p>Australia’s National Quantum Strategy sees us join other countries and regions, racing to unlock the potential of quantum technology and dominate the market. <a href="https://www.forbes.com/sites/forbestechcouncil/2020/10/05/what-the-us-investment-in-quantum-computing-means-for-security/">The United States</a>, <a href="https://www.newscientist.com/article/mg25233652-000-2021-in-review-jian-wei-pan-leads-chinas-quantum-computing-successes/">China</a>, and <a href="https://digital-strategy.ec.europa.eu/en/policies/quantum-technologies-flagship">Europe</a> are investing billions of dollars in quantum research and development. </p>
<p>If Australia wants to keep up, it needs to act now. But why is keeping up so important?</p>
<p>First, we don’t want to be left behind in the rapidly growing quantum technology industry. <a href="https://www.innovationaus.com/australias-quantum-opportunity-upgraded-to-6-billion/">According to CSIRO projections</a>, the quantum industry could be worth A$4.6 billion by the end of the decade. By 2045, it might employ as many people as the oil and gas sector does today, with revenues of A$6 billion and 19,400 direct jobs.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/better-ai-unhackable-communication-spotting-submarines-the-quantum-tech-arms-race-is-heating-up-179482">Better AI, unhackable communication, spotting submarines: the quantum tech arms race is heating up</a>
</strong>
</em>
</p>
<hr>
<p>As other nations push forward, Australia risks missing out on the potential economic benefits. We could also lose talented workers to countries that are investing more in quantum research. Projects like the ambitious attempt to <a href="https://www.smh.com.au/national/australia-sets-ambitious-goal-to-build-first-complete-quantum-computer-20230502-p5d51r.html">build the world’s first complete quantum computer</a> aim to provide local opportunities and funding alongside their top-line goals.</p>
<p>Moreover, Australia has a responsibility to ensure quantum technologies are developed and used ethically, and their <a href="https://www.weforum.org/agenda/2022/09/organizations-protect-quantum-computing-threat-cybersecurity/">risks</a> managed.</p>
<p>For example, quantum computers could enable hackers to <a href="https://www2.deloitte.com/uk/en/insights/topics/cyber-risk/quantum-computing-ethics-risks.html">break existing encryption protocols</a>, leaving internet services vulnerable. Data harvesting by companies is already a concern, and quantum computing could exacerbate this issue. Even <a href="https://www2.deloitte.com/us/en/insights/industry/public-sector/the-impact-of-quantum-technology-on-national-security.html">national security could be jeopardised</a> by quantum decryption.</p>
<h2>Responsible innovation</h2>
<p>To make the most of the power of quantum technology, we need to be proactive, focus on the public good, and think about it from many perspectives to ensure “<a href="https://research.csiro.au/ri/">responsible innovation</a>”.</p>
<p>Collaboration and broad dialogue will be necessary. Conversations between experts in fields like quantum computing, cybersecurity, ethics and social sciences – perhaps via regular conferences or workshops – will help us tackle the technical and ethical risks.</p>
<p>Engaging with society and focusing on the public good will also be essential. The public must be involved in discussions to ensure new quantum technologies benefit everyone, not just businesses. Town hall meetings, public forums or online chats can help scientists, policymakers and citizens share views.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-second-quantum-revolution-is-almost-here-we-need-to-make-sure-it-benefits-the-many-not-the-few-161878">The 'second quantum revolution' is almost here. We need to make sure it benefits the many, not the few</a>
</strong>
</em>
</p>
<hr>
<p>And we must make sure that “responsibility” always sits right alongside “innovation” in quantum technologies. Organisations working on quantum tech could establish “responsible quantum committees” to address risks and involve stakeholders from the outset.</p>
<p>Success in quantum technology will be all about striking the right balance: encouraging both innovation and responsibility. By investing in quantum technology and working together to ensure its responsible development, Australia can continue to be a leader in scientific innovation while benefiting from these emerging technologies’ transformative potential. </p>
<p>Australia’s National Quantum Strategy is a step in this direction.</p>
<p class="fine-print"><em><span>Jarryd Daymond is an associate researcher on a project funded by the Medical Research Future Fund (MRFF) Targeted Translation Research Accelerator (TTRA).</span></em></p><p class="fine-print"><em><span>Jarryd Daymond, Lecturer, University of Sydney. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Sci-fi movies are the secret weapon that could help Silicon Valley grow up</h1><figure><img src="https://images.theconversation.com/files/244833/original/file-20181109-116820-1dd6y55.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If you don't want to be facing down an angry dinosaur, pay attention to what happens on screen.</span> <span class="attribution"><a class="source" href="https://www.imdb.com/title/tt0107290/mediaviewer/rm2687618048">Universal Pictures</a></span></figcaption></figure><p>If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic “<a href="https://www.imdb.com/title/tt0107290/">Jurassic Park</a>,” it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” </p>
<p>Goldblum’s character, Dr. Ian Malcolm, was warning against the hubris of naively tinkering with dinosaur DNA in an effort to bring these extinct creatures back to life. Twenty-five years on, his words are taking on new relevance as a growing number of scientists and companies are grappling with how to <a href="https://www.theguardian.com/technology/2018/oct/12/tech-humanities-misinformation-philosophy-psychology-graduates-mozilla-head-mitchell-baker">tread the line between “could” and “should”</a> in areas ranging from <a href="http://science.sciencemag.org/content/362/6414/527">gene editing</a> and <a href="https://www.wsj.com/articles/meet-the-scientists-bringing-extinct-species-back-from-the-dead-1539093600">real-world “de-extinction”</a> to <a href="https://www.bloomberg.com/news/articles/2018-10-19/biohackers-are-implanting-everything-from-magnets-to-sex-toys">human augmentation</a>, <a href="https://www.vox.com/future-perfect/2018/10/16/17978596/stephen-hawking-ai-climate-change-robots-future-universe-earth">artificial intelligence</a> and many others. </p>
<p>Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in <a href="https://mango.bz/books/films-from-the-future-by-andrew-maynard-458-b">watching more science fiction movies</a> like “Jurassic Park.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/244822/original/file-20181109-36763-1x9u650.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Just because you can….</span>
<span class="attribution"><a class="source" href="https://www.throwbacks.com/jeff-goldblum-talks-jurassic-park/">Universal Pictures</a></span>
</figcaption>
</figure>
<h2>Hollywood lessons in societal risks</h2>
<p>I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the <a href="https://scholar.google.com/citations?user=b8NhWc4AAAAJ&hl=en&oi=ao">social risks and benefits associated with emerging technologies</a>. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly. </p>
<p>New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.</p>
<p>Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation. </p>
<p>This is clearly seen in “Jurassic Park.” The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in <a href="https://news.nationalgeographic.com/news/2013/13/130310-extinct-species-cloning-deextinction-genetics-science/">bringing extinct species back from the dead</a>. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.</p>
<p>Similar insights emerge from a number of other movies, including Spielberg’s 2002 film “<a href="https://www.imdb.com/title/tt0181689/">Minority Report</a>” – which presaged a growing capacity for <a href="https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/">AI-enabled crime prediction</a> and the ethical conundrums it’s raising – as well as the 2014 film “<a href="https://www.imdb.com/title/tt0470752/">Ex Machina</a>.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/244824/original/file-20181109-37973-1eh9qw0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Movie geniuses always have blind spots that viewers can hopefully learn from.</span>
<span class="attribution"><a class="source" href="https://www.imdb.com/title/tt0470752/mediaviewer/rm1897135872">Universal Pictures International</a></span>
</figcaption>
</figure>
<p>As with “Jurassic Park,” “Ex Machina” centers on a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.</p>
<p>The movie tells the tale of an egotistical genius who creates a remarkably intelligent machine – but he lacks the awareness to recognize his own limitations and the risks of what he’s doing. It also provides a chilling insight into the potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.</p>
<p>The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.</p>
<p>The technologies in “Jurassic Park,” “Minority Report” and “Ex Machina” lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them. </p>
<h2>Real-world problems worked out on-screen</h2>
<p>In a recent op-ed in The New York Times, journalist Kara Swisher asked, “<a href="https://www.nytimes.com/2018/10/21/opinion/who-will-teach-silicon-valley-to-be-ethical.html">Who will teach Silicon Valley to be ethical</a>?” Prompted by a growing litany of socially questionable decisions by tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=693&fit=crop&dpr=1 600w, https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=693&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=693&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=871&fit=crop&dpr=1 754w, https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=871&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/244961/original/file-20181111-39548-r1p8kd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=871&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/SpaceX-Moon/f67fc5d84eb149ba8c1a3c3f059165ea/1/0">AP Photo/Chris Carlson</a></span>
</figcaption>
</figure>
<p>Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like <a href="https://theconversation.com/facebook-is-killing-democracy-with-its-personality-profiling-data-93611">Facebook</a> and <a href="https://theconversation.com/uber-cant-be-ethical-its-business-model-wont-allow-it-85015">Uber</a> as well as Tesla’s <a href="https://www.theverge.com/2018/9/27/17911428/sec-lawsuit-elon-musk-tesla-funding-tweet">Elon Musk</a> have highlighted the social as well as the business dangers of operating without fully understanding the human consequences of their actions.</p>
<p>Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.</p>
<p>For instance, earlier this year, researchers from Google and DeepMind <a href="https://arxiv.org/pdf/1807.05162.pdf">published details of an artificial intelligence-enabled system</a> that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions – especially when coupled with long-range surveillance cameras.</p>
<p>Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, <a href="https://www.theguardian.com/technology/2018/oct/12/tech-humanities-misinformation-philosophy-psychology-graduates-mozilla-head-mitchell-baker">as Mozilla’s Mitchell Baker suggests</a>, scientists and technologists engaging with the humanities can be helpful, it’s not enough.</p>
<h2>Movies are an easy way into a serious discipline</h2>
<p>The “new formulation” of complementary skills Baker says innovators desperately need already exists in a thriving interdisciplinary community focused on <a href="https://theconversation.com/us/topics/responsible-innovation-31243">socially responsible innovation</a>. My home institution, the <a href="http://sfis.asu.edu">School for the Future of Innovation in Society</a> at Arizona State University, is just one part of this. </p>
<p>Experts within this global community are actively exploring ways to translate good ideas into responsible practices. And this includes the need for <a href="https://doi.org/10.1038/nnano.2015.196">creative insights into the social landscape around technology innovation</a>, and the imagination to develop novel ways to navigate it.</p>
<p>Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=459&fit=crop&dpr=1 600w, https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=459&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=459&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=576&fit=crop&dpr=1 754w, https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=576&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/244826/original/file-20181109-34102-1kuntvu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=576&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">People love to come together as a movie audience.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/nationalarchives/3002426059">The National Archives UK</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.</p>
<p>This is the underlying theme in my new book “<a href="https://mango.bz/books/films-from-the-future-by-andrew-maynard-458-b">Films from the Future: The Technology and Morality of Sci-Fi Movies</a>.” It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.</p>
<p>Of course science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?</p>
<p>It certainly beats being blindsided by risks that, with hindsight, could have been avoided.</p>
<p class="fine-print"><em><span>Andrew Maynard is author of the book "Films from the Future: The Technology and Morality of Sci-Fi Movies" (published by Mango), on which this article is based.</span></em></p>
<p class="fine-print"><em>As fictional inventors make terrible choices on the big screen, real-world tech innovators can learn from their example how not to make the same kinds of ethical mistakes.</em></p>
<p class="fine-print">Andrew Maynard, Director, Risk Innovation Lab, Arizona State University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>After Tempe fatality, self-driving car developers must engage with public now or risk rejection</h1>
<p class="fine-print">Published March 21, 2018.</p>
<figure><img src="https://images.theconversation.com/files/211270/original/file-20180321-80630-11uj418.jpg?ixlib=rb-1.1.0&rect=1%2C77%2C1055%2C518&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An autonomous vehicle struck and killed a pedestrian on March 18.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Self-Driving-Vehicle-Fatality/d7a9837e1275452d84aed6ea70a7f9ec/2/0">ABC-15.com via AP</a></span></figcaption></figure>
<p>On Sunday evening, March 18, an Uber SUV hit and killed a pedestrian <a href="https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html">in the Arizona city of Tempe</a>. In a place where <a href="https://www.azcentral.com/story/news/local/phoenix-breaking/2018/03/13/arizona-official-10-pedestrian-deaths-week-show-major-crisis/422808002/">vehicle-related pedestrian fatalities are unfortunately a regular occurrence</a>, this shouldn’t have stood out as particularly unusual. But what made the death of 49-year-old Elaine Herzberg different was that the vehicle that killed her was driving itself.</p>
<p><a href="https://scholar.google.com/citations?user=IgLsSdQAAAAJ&hl=en&oi=ao">As faculty</a> <a href="https://scholar.google.com/citations?user=VX2EqQgAAAAJ&hl=en&oi=ao">on Arizona State University’s Tempe campus</a> <a href="https://scholar.google.com/citations?user=ZylR35wAAAAJ&hl=en&oi=ao">who also study</a> <a href="https://scholar.google.com/citations?user=b8NhWc4AAAAJ&hl=en&oi=ao">technology innovation</a>, we’ve become used to seeing self-driving cars operated by Uber, Waymo and others on our daily commutes. We know that our neighbors and students are largely excited that our streets are being used to test self-driving technologies. And we love pointing out the cars and SUVs topped with spinning sensors to colleagues from out of town. But we also know that the people who live and work here have local knowledge and values that are being ignored by those who are designing and testing these new technologies.</p>
<p>In 2015, Arizona famously opened its doors to such vehicles by <a href="http://azmemory.azlibrary.gov/cdm/ref/collection/execorders/id/752">encouraging companies to test out their self-driving cars on public roads</a>. And apart from a <a href="https://www.wired.com/2017/03/uber-self-driving-crash-tempe-arizona/">relatively minor crash in 2017</a>, there have been few serious incidents. Yet despite this record, the lack of coordination, collaboration and transparency between industry, city government and the public – even on issues as basic as road safety – has created an environment where the future success and safe use of self-driving cars is far from certain.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=442&fit=crop&dpr=1 600w, https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=442&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=442&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=555&fit=crop&dpr=1 754w, https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=555&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/211271/original/file-20180321-80649-rzv937.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=555&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A 2017 autonomous vehicle crash in Tempe didn’t kill anyone.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Self-Driving-Cars-Arizona/0fd5dee09ccc496ea5a7e38ebf968ed8/1/0">Tempe Police Department via AP</a></span>
</figcaption>
</figure>
<h2>Bearing the risks, without a voice</h2>
<p>No matter what, 4,000 pounds of steel and plastic hurtling along at speed is dangerous, whether a computer or a person is in control. Sunday’s incident forces society to think more critically about the <a href="https://theconversation.com/to-get-the-most-out-of-self-driving-cars-tap-the-brakes-on-their-rollout-88444">risks we’re willing to take</a> as self-driving cars are tested on our roads, and who gets to make these decisions.</p>
<p>Sadly, Elaine Herzberg’s death occurred just weeks after Arizona Governor <a href="http://azmemory.azlibrary.gov/cdm/singleitem/collection/execorders/id/773/rec/1">Doug Ducey signed an executive order</a> requiring self-driving cars in the state to meet specific safety standards. The provisions in this executive order have yet to go into effect. But even if they had been fully implemented, it’s not clear whether they would have prevented this collision. It’s even less clear that they would help local communities trust, and see the benefits of, the self-driving cars on their streets.</p>
<p>For this, we desperately need a broader conversation about self-driving vehicles and how to develop and use them responsibly. And this means product developers and policymakers must actually talk with people who are potentially affected by the technology.</p>
<p>Residents don’t have the technical expertise of the autonomous vehicle developers. But they likely do have insights that would substantially enhance the safety and trustworthiness of the vehicles being tested. For instance, local communities might have suggested that testing an unproven technology in a school zone when children are present might not be the best idea. Or that maybe self-driving cars shouldn’t be experimented with late at night in poorly lit places, where it’s hard to see and predict how pedestrians might behave. If companies like Uber had met with Elaine and others like her, maybe things would have turned out differently.</p>
<h2>Innovating experts tend to forget other views</h2>
<p>Unfortunately, this level of public engagement is often overlooked in technology innovation. It doesn’t help that the American public is frequently seen by innovators and developers as so uninformed or biased as to be of little use in conversations about emerging technologies. But in our experience, listening to the values, concerns and aspirations of the general public can inject new and valuable ideas into the ways in which research programs are run.</p>
<p>From our own experiences with other emerging technologies – ranging from nanotechnology and synthetic biology to artificial intelligence, smart cities and autonomous vehicles – we know there are some basic guidelines that can help support successful, safe and beneficial innovation. These include partnering with experts who know something about socially responsible innovation, engaging with and listening to communities who are potentially affected by the technology and paying attention to what people actually want – as well as what they do not.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=472&fit=crop&dpr=1 600w, https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=472&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=472&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=593&fit=crop&dpr=1 754w, https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=593&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/211276/original/file-20180321-80621-1g3jnxo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=593&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ask Dr. Frankenstein about the hazards of innovating in isolation.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/conso/2455893997">Paolo</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Of course, implementing these rules of thumb is far from simple. But hard experience shows that not engaging with stakeholders can have disastrous consequences. Nuclear power, genetically modified foods and many other innovations all struggled to reach their potential in part because leaders in the field didn’t think to consult with outsiders as part of the innovation process. Many “experts” don’t automatically think about what they can gain from partnering up with “regular” people.</p>
<p>These “nonexperts” may not be able to contribute directly to the technology. (Although even here, <a href="https://cspo.org/news/rightful-place-of-science-citizen-science/">citizen science</a> suggests that the capabilities of regular people are often underestimated.) But they can offer unique insights to help ensure technologies align with what they value. And ultimately, these are the people who will decide whether a technology succeeds or not.</p>
<p>These are challenges we face on a daily basis in our own work, whether it’s in developing smart cities, engaging with various stakeholders around emerging issues or exploring how to ensure new technologies are safe, beneficial and responsible. Through experience, as well as specific initiatives like <a href="https://ecastnetwork.org/">Expert and Citizen Engagement of Science and Technology (ECAST)</a> – a network of academics, informal science educators and nonpartisan think tanks we collaborate with – we’ve learned that if innovators, scientists and policymakers engage early and often with people who understand responsible innovation, they can avoid unpleasant surprises down the pike.</p>
<p>Engaging constructively with members of the public early on can help develop technologies and regulations that are better aligned with what people want and are willing to support. For instance, in the <a href="http://www.newtcenter.org/">Nanotechnology Enabled Water Treatment</a> program – a collaboration between ASU, Rice University, the University of Texas El Paso and Yale University – we are working with both manufacturers and consumers to ensure that domestic nanotechnology-enabled water filters are effective, safe and accepted by users. Our commercial partners know that if there is no public engagement, they risk losing public trust – however good the technology. </p>
<p>Unfortunately, this type of collaboration isn’t currently happening as much as is needed in Arizona with self-driving cars. Despite working in Tempe and being intimately involved in responsible innovation and public engagement, we’ve seen precious few attempts by companies like Uber and Waymo to talk with and listen to local communities. When everything’s going smoothly, not engaging doesn’t seem like a big deal. But when problems do arise, this lack of engagement could severely weaken the foundations on which self-driving technologies will be built.</p>
<p>Whether the fatal collision in Tempe is enough to shake these foundations remains to be seen. But it is a wake-up call to developers and policymakers that the honeymoon period with self-driving cars may be coming to an end. And as it does, there’ll be a greater need than ever to engage and partner with people who can help ensure the self-driving future the public wants.</p>
<p class="fine-print"><em><span>Andrew Maynard is affiliated with the World Economic Forum Council on the Future of Technology, Values and Policy.</span></em></p><p class="fine-print"><em><span>Thaddeus R. Miller receives funding from the US National Science Foundation.</span></em></p><p class="fine-print"><em><span>Ira Bennett and Jameson Wetmore do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Companies developing autonomous vehicles are missing out on the local knowledge and values of the people who live where these cars are tested. And that lack of engagement sets up bigger problems.</em></p>
<p class="fine-print">Andrew Maynard, Director, Risk Innovation Lab, Arizona State University; Jameson Wetmore, Associate Professor, School for the Future of Innovation in Society, Arizona State University; Thaddeus R. Miller, Assistant Professor, School for the Future of Innovation in Society and The Polytechnic School, Arizona State University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Eager to dye your hair with ‘nontoxic’ graphene nanoparticles? Not so fast!</h1>
<p class="fine-print">Published March 20, 2018.</p>
<figure><img src="https://images.theconversation.com/files/211082/original/file-20180319-31624-18d3y07.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Subbing new risks for the current dyes’ dangers?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hairdresser-salon-woman-during-hair-wash-1044886945">Evgeny Savchenko/Shutterstock.com</a></span></figcaption></figure><p>Graphene is something of a celebrity in the world of nanoscale materials.
Isolated in 2004 by Nobel Prize winners <a href="https://www.nobelprize.org/nobel_prizes/physics/laureates/2010/">Andre Geim and Konstantin Novoselov</a>, these ultrathin sheets of carbon atoms are already finding novel uses in areas like <a href="https://www.nist.gov/programs-projects/graphene-electronics">electronics</a>, <a href="https://spectrum.ieee.org/nanoclast/green-tech/conservation/graphene-heating-system-dramatically-reduces-home-energy-costs">high-efficiency heating systems</a>, <a href="https://www.ft.com/content/d768030e-d8ec-11e7-9504-59efdb70e12f">water purification technologies</a> and <a href="http://cmp.callawaygolf.com/2018/01/23/chrome-soft-golf-balls-need-know/">even golf balls</a>. According to recent research published in the journal Chem, <a href="https://doi.org/10.1016/j.chempr.2018.02.021">hair dyes can now be added to this list</a>. </p>
<p>But how safe and responsible is this new use of the carbon-based wonder-material?</p>
<p>Northwestern University’s <a href="https://www.eurekalert.org/pub_releases/2018-03/nu-gfn031218.php">press release</a> proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “<a href="http://www.sciencemag.org/news/2018/03/enough-toxic-hair-dyes-we-could-use-graphene-instead">Enough with the toxic hair dyes. We could use graphene instead</a>,” and “<a href="http://en.brinkwire.com/215369/miracle-material-graphene-used-to-create-the-ultimate-hair-dye/">’Miracle material’ graphene used to create the ultimate hair dye</a>.” </p>
<p>From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet <a href="https://scholar.google.com/citations?user=b8NhWc4AAAAJ&hl=en&oi=ao">having studied the potential health and environmental impacts</a> of engineered nanomaterials for <a href="http://dx.doi.org/10.1038/nnano.2016.270">more years than I care to remember</a>, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/211008/original/file-20180319-31602-zpomir.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">As the dye wears off, where do the nanoparticles go?</span>
<span class="attribution"><span class="source">Jiaxing Huang, Northwestern University</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Tiny materials, potentially bigger problems</h2>
<p>Engineered nanomaterials like graphene and graphene oxide (the particular form used in the dye experiments) aren’t necessarily harmful. But nanomaterials can behave in unusual ways that depend on particle size, shape, chemistry and application. Because of this, researchers have long been cautious about giving them a clean bill of health without first testing them extensively. And while a <a href="http://dx.doi.org/10.1021/acsnano.7b04120">large body of research to date</a> doesn’t indicate graphene is particularly dangerous, neither does it suggest it’s completely safe.</p>
<p>A quick search of the scientific literature shows that over 2,000 studies mentioning graphene toxicity have been published since 2004; nearly 500 appeared in 2017 alone.</p>
<p>This growing body of research suggests that if graphene gets into your body or the environment in sufficient quantities, it could cause harm. A 2016 review, for instance, indicated that graphene oxide particles could <a href="http://dx.doi.org/10.1016/j.addr.2016.04.028">result in lung damage at high doses</a> (equivalent to around 0.7 grams of inhaled material). Another review published in 2017 suggested that these <a href="http://dx.doi.org/10.1088/2053-1583/aa5476">materials could affect the biology</a> of some plants and algae, as well as invertebrates and vertebrates toward the lower end of the ecological pyramid. The authors of the 2017 study concluded that research “unequivocally confirms that graphene in any of its numerous forms and derivatives must be approached as a potentially hazardous material.” </p>
<p>These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or of release into the environment.</p>
<p>Unfortunately, graphene-based hair dyes tick both of these boxes. Used in this way, the substance is potentially inhalable (especially with spray-on products) and ingestible through careless use. It’s also almost guaranteed that excess graphene-containing dye will wash down the drain and into the environment. </p>
<p>Here, due diligence is needed to ensure that the material is acceptably safe. This is something that goes beyond the seeming authority of a press release headline. In fact, such misleading headlines could end up being counterproductive, as they undermine efforts to demonstrate trustworthiness with consumers and investors.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=317&fit=crop&dpr=1 600w, https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=317&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=317&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=398&fit=crop&dpr=1 754w, https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=398&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/211020/original/file-20180319-31627-1nv890z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=398&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Simulation of a graphene oxide framework, pictured in black, to remove contaminants from water.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/oakridgelab/14006201292">Adrien Nicolaï/RPI</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>Undermining other efforts?</h2>
<p>I was alerted to just how counterproductive such headlines can be by my colleague Tim Harper, founder of <a href="http://g2o.co/">G2O Water Technologies</a> – a company that uses graphene oxide-coated membranes to treat wastewater. Like many companies in this area, G2O has been working to use graphene responsibly by minimizing the amount of graphene that ends up released to the environment.</p>
<p>Yet as Tim pointed out to me, if people are led to believe “that bunging a few grams of graphene down the drain every time you dye your hair is OK, this invalidates all the work we are doing making sure the few nanograms of graphene on our membranes stay put.” Many companies that use nanomaterials are trying to do the right thing, but it’s hard to justify the time and expense of being responsible when someone else’s more cavalier actions undercut your efforts.</p>
<p>Here, naïve claims of safety and gung-ho approaches to promoting graphene-containing products could very easily threaten the responsible development and use of this material. And if companies pull back from acting responsibly, there’s a danger that consumers, investors and even regulators will lose trust in their ability to ensure the safety of products of all kinds.</p>
<p>If this happens, consumers will be the ultimate losers. Used responsibly, graphene could lead to more sustainable and environmentally benign products. Yet having watched the public backlash against technologies like genetic engineering over the past couple of decades, I’m acutely aware that failing to earn the trust of stakeholders and consumers can stymie technologies, regardless of how safe and beneficial they are.</p>
<h2>Overpromising results and overlooking risk</h2>
<p>This is where researchers and their institutions need to move beyond an “<a href="https://doi.org/10.1038/nnano.2008.14">economy of promises</a>” that spurs on hyperbole and discourages caution, and think more critically about how their statements may ultimately undermine responsible and beneficial development of a technology. They may even want to consider using guidelines, such as the <a href="http://societyinside.com/sites/default/files/Principles%20for%20Responsible%20Innovation%20Short%20February%202018_0.pdf">Principles for Responsible Innovation</a> developed by the organization <a href="http://societyinside.com/">Society Inside</a>, for instance, to guide what they do and say.</p>
<p>To their credit, the authors of the dye study did give a passing mention to research on graphene safety, mostly focusing on an assumed level of safety compared to current dye products. Yet even this perfunctory level of caution failed to make it into the <a href="https://www.eurekalert.org/pub_releases/2018-03/nu-gfn031218.php">press release</a>, which touted a “new hair dye that is nontoxic, nondamaging and lasts through many washes without fading.”</p>
<p>It may turn out that graphene-based hair dyes can be developed safely. To be fair, the reported application isn’t even close to commercial R&D yet, never mind the salon shelf. And certainly, there’s a case to be made for substituting some of the <a href="https://www.nytimes.com/2018/03/16/science/hair-dye-graphene.html">harsh chemicals currently used in some products</a> with more benign ones. But this won’t happen while researchers and their institutions gloss over legitimate concerns and cautions with blind optimism. </p>
<p>Rather, by taking more care in how nanomaterial research is framed and promoted, researchers and their academic institutions could do a lot to ensure future nano-enabled consumer products are safe, beneficial and, above all, responsible.</p>
<p class="fine-print"><em><span>Andrew Maynard receives support from the National Science Foundation as part of the Nanotechnology-Enabled Water Treatment (NEWT) Engineering Research Center.</span></em></p>
<p>Less-toxic hair dye would be a great invention. But discounting the risks that come with nanoparticles could undermine other efforts to protect human health and the environment from their effects.</p>
<p>Andrew Maynard, Director, Risk Innovation Lab, Arizona State University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Considering ethics now before radically new brain technologies get away from us</h1>
<figure><img src="https://images.theconversation.com/files/137669/original/image-20160913-4955-1hxmw14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Now's the time to think about what we're getting into with neurotechnologies.</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=133182821">Brain image via www.shutterstock.com.</a></span></figcaption></figure><p>Imagine infusing thousands of wireless devices into your brain, and using them to both monitor its activity and directly influence its actions. It sounds like the stuff of science fiction, and for the moment it still is – but possibly not for long.</p>
<p>Brain research is on a roll at the moment. And as it converges with advances in science and technology more broadly, it’s transforming what we are likely to be able to achieve in the near future. </p>
<p>Spurring the field on is the promise of more effective treatments for debilitating neurological and psychological disorders such as <a href="http://www.ninds.nih.gov/disorders/epilepsy/epilepsy.htm">epilepsy</a>, <a href="http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm">Parkinson’s disease</a> and <a href="https://www.nimh.nih.gov/health/topics/depression/index.shtml">depression</a>. But new brain technologies will increasingly have the potential to alter how someone thinks, feels, behaves and even perceives themselves and others around them – and not necessarily in ways that are within their control or with their consent.</p>
<p>This is where things begin to get ethically uncomfortable.</p>
<p>Because of concerns like these, the U.S. National Academies of Sciences, Engineering and Medicine (NAS) are <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">co-hosting a meeting of experts this week</a> on responsible innovation in brain science.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/oO0zy30n_jQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Berkeley’s ‘neural dust’ sensors are one of the latest neurotech advances.</span></figcaption>
</figure>
<h2>Where are neurotechnologies now?</h2>
<p>Brain research is intimately entwined with advances in the “neurotechnologies” that not only help us study the brain’s inner workings, but also transform the ways we can interact with and influence it.</p>
<p>For example, researchers at the University of California Berkeley recently <a href="http://news.berkeley.edu/2016/08/03/sprinkling-of-neural-dust-opens-door-to-electroceuticals/">published the first in-animal trials of what they called “neural dust”</a> – implanted millimeter-sized sensors. They inserted the sensors in <a href="http://dx.doi.org/10.1016/j.neuron.2016.06.034">the nerves and muscles of rats</a>, showing that these miniature wirelessly powered and connected sensors can monitor neural activity. The long-term aim, though, is to introduce thousands of neural dust particles <a href="http://arxiv.org/abs/1307.2196">into human brains</a>.</p>
<p>The UC Berkeley sensors are still relatively large – on a par with a coarse grain of sand – and they only report on what’s happening around them. Yet advances in nanoscale fabrication are likely to enable their further miniaturization. (The researchers estimate they could be made <a href="https://arxiv.org/abs/1307.2196">thinner than a human hair</a>.) And in the future, combining them with technologies like <a href="http://www.scientificamerican.com/article/optogenetics-controlling/">optogenetics</a> – using light to stimulate genetically modified neurons – could enable wireless, localized brain interrogation and control.</p>
<p>Used in this way, future generations of neural dust could transform how chronic neurological disorders are managed. They could also enable hardwired brain-computer interfaces (the <a href="https://arxiv.org/abs/1307.2196">original motivation behind this research</a>), or even be used to enhance cognitive ability and modify behavior.</p>
<p>In 2013, President Obama launched the multi-year, multi-million dollar <a href="https://www.whitehouse.gov/BRAIN">U.S. BRAIN Initiative</a> (Brain Research through Advancing Innovative Neurotechnologies). The same year, the European Commission launched the <a href="https://www.humanbrainproject.eu/">Human Brain Project</a>, focusing on advancing brain research, cognitive neuroscience and brain-inspired computing. There are also active brain research initiatives in <a href="https://www.sfn.org/news-and-calendar/neuroscience-quarterly/spring-2016/china-qa">China</a>, <a href="http://rstb.royalsocietypublishing.org/content/370/1668/20140310">Japan</a>, <a href="http://english.yonhapnews.co.kr/business/2016/05/30/0504000000AEN20160530008200320.html">Korea</a>, <a href="http://www.labman.org/">Latin America</a>, <a href="http://israelbrain.org/">Israel</a>, <a href="http://bluebrain.epfl.ch/">Switzerland</a>, <a href="http://www.braincanada.ca/">Canada</a> and even <a href="http://www.ncbi.nlm.nih.gov/pubmed/21870466">Cuba</a>.</p>
<p>Together, these represent an emerging and globally coordinated effort to not only better understand how the brain works, but to find new ways of controlling and enhancing it (in particular in disease treatment and prevention); to interface with it; and to build computers and other artificial systems that are inspired by it.</p>
<h2>Cutting-edge tech comes with ethical questions</h2>
<p>This week’s <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">NAS workshop</a> – organized by the <a href="https://www.innovationpolicyplatform.org/project-emerging-technologies-and-brain-oecd-bnct">Organization for Economic Cooperation and Development</a> and supported by the National Science Foundation and my home institution of Arizona State University – isn’t the first gathering of experts to discuss the ethics of brain technologies. In fact there’s already an active international community of experts addressing “<a href="https://en.wikipedia.org/wiki/Neuroethics">neuroethics</a>.”</p>
<p>Many of these scientific initiatives do have a prominent ethics component. The U.S. BRAIN Initiative, for example, includes a <a href="https://www.braininitiative.nih.gov/about/newg.htm">Neuroethics Workgroup</a>, while the E.C. Human Brain Project is using an <a href="https://www.humanbrainproject.eu/2016-ethics">Ethics Map</a> to guide research and development. These and others are grappling with the formidable challenges of developing future neurotechnologies responsibly.</p>
<p>It’s against this backdrop that the NAS workshop sets out to better understand the social and ethical opportunities and challenges emerging from global brain research and neurotechnologies. A goal is to identify ways of ensuring these technologies are developed in ways that are responsive to social needs, desires and concerns. And it comes at a time when brain research is beginning to open up radical new possibilities that were far beyond our grasp just a few years ago.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=540&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=540&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=540&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=679&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=679&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=679&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Transcranial magnetic stimulation uses a powerful and rapidly changing electrical current to excite neural processes in the brain, similar to direct stimulation with electrodes.</span>
<span class="attribution"><span class="source">Eric Wassermann, M.D.</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>In 2010, for instance, researchers at MIT demonstrated that Transcranial Magnetic Stimulation, or TMS – a noninvasive neurotechnology – <a href="http://news.mit.edu/2010/moral-control-0330">could temporarily alter someone’s moral judgment</a>. Another noninvasive technique called <a href="https://www.wired.com/2014/01/read-zapping-brain/">transcranial Direct Current Stimulation</a> (tDCS) delivers low-level electrical currents to the brain via electrodes on the scalp; it’s being explored as a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3270156/">treatment for clinical conditions from depression to chronic pain</a> – while already being used in <a href="http://foc.us/">consumer products</a> and by <a href="http://www.wsj.com/articles/the-weird-world-of-brain-hacking-1447096569">do-it-yourselfers</a> to allegedly self-induce changes in mental state and ability.</p>
<p>Crude as current TMS and tDCS capabilities are, they are forcing people to think about the responsible development and use of technologies that can change behavior, personality and thinking ability at the flick of a switch. And the ethical questions they raise are far from straightforward.</p>
<p>For instance, should students be allowed to take exams while using tDCS? Should teachers be able to use tDCS in the classroom? Should TMS be used to prevent a soldier’s moral judgment from interfering with military operations?</p>
<p>These and similar questions grapple with what is already possible. Complex as they are, they pale against the challenges emerging neurotechnologies are likely to raise.</p>
<h2>Preparing now for what’s to come</h2>
<p>As research leads to an increasingly sophisticated and fine-grained understanding of how our brains function, related neurotechnologies are likely to become equally sophisticated. As they do, our abilities to precisely control function, thinking, behavior and personality, will extend far beyond what is currently possible.</p>
<p>To get a sense of the emerging ethical and social challenges such capabilities potentially raise, consider this speculative near-future scenario:</p>
<p>Imagine that in a few years’ time, the UC Berkeley neural dust has been successfully miniaturized and combined with optogenetics, allowing thousands of micrometer-sized devices to be seeded through someone’s brain that are capable of monitoring and influencing localized brain functions. Now imagine this network of neural transceivers is wirelessly connected to an external computer, and from there, to the internet.</p>
<p>Such a network – a crude foreshadowing of science fiction author <a href="http://www.goodreads.com/author/show/5807106.Iain_M_Banks">Iain M. Banks</a>’ “neural lace” (a concept that has <a href="http://www.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638">already grabbed the attention of Elon Musk</a>) – would revolutionize the detection and treatment of neurological conditions, potentially improving quality of life for millions of people. It would enable external devices to be controlled through thought, effectively integrating networked brains into the Internet of Things. It could help some people overcome physical limitations. And it would potentially provide unprecedented levels of cognitive enhancement, by allowing people to interface directly with cloud-based artificial intelligence and other online systems.</p>
<p>Think Apple’s Siri or Amazon’s Echo hardwired into your brain, and you begin to get the idea.</p>
<p>Yet this neurotech – which is almost within reach of current technological capabilities – would not be risk-free. The risks could be social: perhaps a growing socioeconomic divide between those who are neuro-enhanced and those who are not. They could involve privacy and autonomy: the ability of employers and law enforcement to monitor, and even alter, thoughts and feelings. The technology might threaten personal well-being and societal cohesion through (hypothetical) cyber substance abuse, where direct-to-brain code replaces psychoactive substances. And it could make users highly vulnerable to neurological cyberattacks.</p>
<p>Of course, predicting and responding to possible future risks is fraught with difficulties, and depends as much on who considers what a risk (and to whom) as it does the capabilities of emerging technologies to do harm. Yet it’s hard to avoid the likely disruptive potential of near-future neurotechnologies. Thus the urgent need to address – as a society – what we want the future of brain technologies to look like.</p>
<p>Moving forward, the ethical and responsible development of emerging brain technologies will require new thinking, along with considerable investment, in what might go wrong, and how to avoid it. Here, we can learn from thinking about responsible and ethical innovation that has come to light around <a href="https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA">recombinant DNA</a>, <a href="https://cns.asu.edu/viri">nanotechnology</a>, <a href="https://experimentearth.org/">geoengineering</a> and other cutting-edge areas of science and technology. </p>
<p>To develop future brain technologies both successfully and responsibly, we need to do so in ways that avoid potential pitfalls while not stifling innovation. We need approaches that ensure ordinary people can easily find out how these technologies might affect their lives – and they must have a say in how they’re used.</p>
<p>All this won’t necessarily be easy – responsible innovation rarely is. But through initiatives like this week’s NAS workshop and others, we have the opportunity to develop brain technologies that are profoundly beneficial, without getting caught up in an ethical minefield.</p>
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Andrew Maynard is a member of the ASU School for the Future of Innovation in Society, which is co-organizing the September 15-16 workshop on responsible innovation in brain science.</span></em></p>
<p>How will neurotech evolve? An NAS workshop this week focuses on the social and ethical opportunities and challenges we face both now and down the road.</p>