Neuroethics – The Conversation

Several companies are testing brain implants – why is there so much attention swirling around Neuralink? Two professors unpack the ethical issues

<figure><img src="https://images.theconversation.com/files/575184/original/file-20240213-26-hubky4.jpg?ixlib=rb-1.1.0&rect=0%2C6%2C2083%2C1427&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Brain-computer interfaces have the potential to transform some people's lives, but they raise a host of ethical issues, too.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-brain-royalty-free-image/1195715936?phrase=brain+computer&adppopup=true">Andriy Onufriyenko/Moment via Getty Images</a></span></figcaption></figure>

<p><em>Putting a computer inside someone’s brain used to feel like the edge of science fiction. Today, <a href="https://www.gao.gov/products/gao-22-106118">it’s a reality</a>. Academic and commercial groups are testing “brain-computer interface” devices to enable people with disabilities to function more independently. Yet Elon Musk’s company, Neuralink, has put this technology front and center in debates about safety, ethics and neuroscience.</em></p>
<p><em>In January 2024, Musk announced that Neuralink <a href="https://twitter.com/elonmusk/status/1752098683024220632">implanted its first chip</a> in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – <a href="https://depts.washington.edu/bhdept/nancy-s-jecker-phd-sheher">Nancy Jecker, a bioethicist</a>, and <a href="https://neurosurgery.uw.edu/bio/andrew-l-ko-md">Andrew Ko, a neurosurgeon</a> who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.</em></p>
<h2>How does a brain chip work?</h2>
<p>Neuralink’s coin-size device, called N1, is designed to enable patients to carry out actions just by concentrating on them, without moving their bodies.</p>
<p>Subjects in <a href="https://neuralink.com/pdfs/PRIME-Study-Brochure.pdf">the company’s PRIME study</a> – short for Precise Robotically Implanted Brain-Computer Interface – undergo surgery to place the device in a part of the brain that controls movement. The chip records and processes the brain’s electrical activity, then transmits this data to an external device, such as a phone or computer.</p>
<p>The external device “decodes” the patient’s brain activity, learning to associate certain patterns with the patient’s goal: moving a computer cursor up a screen, for example. Over time, the software can recognize a pattern of neural firing that consistently occurs while the participant is imagining that task, and then execute the task for the person. </p>
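<p><em>To make that “decoding” step concrete, here is a minimal sketch in Python: a standard classifier that learns to associate windows of recorded neural activity with an intended cursor command. The channel count, labels and choice of logistic regression are illustrative assumptions, not a description of Neuralink’s actual software.</em></p>
<pre><code>
# Minimal sketch of the "decoding" step: a classifier that learns to map
# windows of neural activity to an intended cursor command. Illustrative only;
# the feature layout, labels and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training data: each row holds spike counts from 1,024 recording
# sites during a short time window; each label is the movement the
# participant was imagining at the time.
n_windows, n_channels = 500, 1024
X_train = rng.poisson(lam=3.0, size=(n_windows, n_channels)).astype(float)
y_train = rng.choice(["cursor_up", "cursor_down", "cursor_left", "cursor_right"],
                     size=n_windows)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)  # "learning to associate" patterns with goals

# At run time, a new window of brain activity is turned into a command.
new_window = rng.poisson(lam=3.0, size=(1, n_channels)).astype(float)
print("Decoded intention:", decoder.predict(new_window)[0])  # e.g. "cursor_up"
</code></pre>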
<p><a href="https://neuralink.com/#mission">Neuralink’s current trial</a> is focused on helping people with paralyzed limbs <a href="https://www.youtube.com/watch?v=z7o39CzHgug">control computers or smartphones</a>. Brain-computer interfaces, commonly called BCIs, can also be used to control devices <a href="https://doi.org/10.1080/17483107.2023.2211602">such as wheelchairs</a>.</p>
<h2>A few companies are testing BCIs. What’s different about Neuralink?</h2>
<p>Noninvasive devices positioned on the outside of a person’s head <a href="https://penntoday.upenn.edu/news/challenges-and-advances-brain-computer-interfaces">have been used in clinical trials for a long time</a>, but they have not received approval from the Food and Drug Administration for commercial development. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman in a green shirt sits with a wired contraption on her head as four other people look on." src="https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575212/original/file-20240213-18-6c2r7t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A visitor experiences a BCI system during the 2023 China International Fair for Trade in Services in Beijing.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/visitor-experiences-domestic-brain-computer-interface-news-photo/1648339155?adppopup=true">Li Xin/Xinhua via Getty Images</a></span>
</figcaption>
</figure>
<p>There are other brain-computer devices, like Neuralink’s, that are <a href="https://doi.org/10.1038/d41586-024-00304-4">fully implanted and wireless</a>. However, <a href="https://neuralink.com/pdfs/PRIME-Study-Brochure.pdf">the N1 implant</a> combines more technologies in a single device: It can target individual neurons, record from thousands of sites in the brain and recharge its small battery wirelessly. These are important advances that could produce better outcomes.</p>
<h2>Why is Neuralink drawing criticism?</h2>
<p>Neuralink <a href="https://twitter.com/neuralink/status/1661857379460468736?lang=en">received FDA approval</a> for human trials in May 2023. Musk <a href="https://twitter.com/elonmusk/status/1752098683024220632">announced the company’s first human trial</a> on his social media platform, X – formerly Twitter – in January 2024.</p>
<p>Information about the implant, however, <a href="https://www.reuters.com/technology/want-details-elon-musks-brain-implant-trial-youll-have-ask-him-2024-02-02/">is scarce</a>, <a href="https://neuralink.com/pdfs/PRIME-Study-Brochure.pdf">aside from a brochure</a> aimed at recruiting trial subjects. Neuralink did not register at <a href="https://clinicaltrials.gov/">ClinicalTrials.gov</a>, as is <a href="https://clinicaltrials.gov/policy/faq">customary and required by some academic journals</a>. </p>
<p>Some scientists are troubled by <a href="https://doi.org/10.1038/d41586-024-00304-4">this lack of transparency</a>. <a href="https://doi.org/10.1161/CIRCOUTCOMES.112.965798">Sharing information about clinical trials is important</a> because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be <a href="https://doi.org/10.1177/25152459211007467">biased toward positive results</a>, preventing researchers from learning from unsuccessful experiments. </p>
<p>Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “<a href="https://www.thehastingscenter.org/the-neuralink-patient-behind-the-musk/">science by press release, while increasingly common, is not science</a>.” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.</p>
<p>When scientific research is funded by <a href="https://www.gao.gov/products/gao-23-105396">government agencies</a> or <a href="https://sciencephilanthropyalliance.org/">philanthropic groups</a>, its aim is to promote the public good. Neuralink, on the other hand, embodies <a href="https://www.propublica.org/article/what-is-private-equity">a private equity model</a>, which is <a href="https://thehill.com/opinion/healthcare/4365741-private-equity-is-buying-up-health-care-but-the-real-problem-is-why-doctors-are-selling/">becoming more common</a> <a href="https://www.press.jhu.edu/books/title/12719/ethically-challenged">in science</a>. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which <a href="https://doi.org/10.1136/medethics-2021-107555">can conflict with patients’ best interests</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A phone screen shows a white page that says 'Elon Musk,' positioned below an abstract black design and the word 'NEURALINK.'" src="https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=366&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=366&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=366&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=460&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=460&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575187/original/file-20240213-22-j0czv9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=460&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Neuralink’s first human implant was announced on Elon Musk’s social media platform X, formerly known as Twitter, in January 2024.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/elon-musk-account-on-twitter-and-neuralink-emblem-displayed-news-photo/1247138943?adppopup=true">NurPhoto via Getty Images</a></span>
</figcaption>
</figure>
<p>In 2022, the U.S. Department of Agriculture <a href="https://www.reuters.com/technology/musks-neuralink-faces-federal-probe-employee-backlash-over-animal-tests-2022-12-05/">investigated animal cruelty</a> at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported. </p>
<p>In a separate incident also reported by Reuters, the Department of Transportation <a href="https://www.reuters.com/technology/musk-brain-implant-company-violated-us-hazardous-material-transport-rules-2024-01-26/">fined Neuralink</a> for violating rules about transporting hazardous materials, including a flammable liquid. </p>
<h2>What other ethical issues does Neuralink’s trial raise?</h2>
<p>When brain-computer interfaces are used to help patients who suffer from disabling conditions function more independently, such as by helping them communicate or move about, this can profoundly improve their quality of life. In particular, it helps people recover a sense of their own agency or autonomy – one of <a href="https://depts.washington.edu/bhdept/ethics-medicine/bioethics-topics/articles/principles-bioethics">the key tenets</a> of medical ethics. </p>
<p>However well-intentioned, medical interventions can produce unintended consequences. With BCIs, scientists and ethicists are particularly concerned about the potential for <a href="https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017">identity theft, password hacking and blackmail</a>. Given how the devices access users’ thoughts, there is also the possibility that <a href="https://doi.org/10.1057/s41599-023-02419-x">users’ autonomy</a> could be manipulated by third parties. </p>
<p>The ethics of medicine requires physicians to help patients, while minimizing potential harm. In addition to errors and privacy risks, scientists worry about <a href="https://doi.org/10.1038/d41586-024-00304-4">potential adverse effects</a> of a completely implanted device like Neuralink, since device components are not easily replaced after implantation.</p>
<p>When considering any invasive medical intervention, patients, providers and developers seek a balance between risk and benefit. At current levels of safety and reliability, the benefit of a permanent implant would have to be large to justify the uncertain risks.</p>
<h2>What’s next?</h2>
<p>For now, Neuralink’s trials are focused on patients with paralysis. Musk has said his ultimate goal for BCIs, however, is to help humanity – <a href="https://www.vox.com/future-perfect/2019/7/17/20697812/elon-musk-neuralink-ai-brain-implant-thread-robot">including healthy people</a> – “<a href="https://www.technologyreview.com/2020/08/30/1007786/elon-musks-neuralink-demo-update-neuroscience-theater/">keep pace” with artificial intelligence</a>.</p>
<p>This raises questions about another core tenet of medical ethics: <a href="https://link.springer.com/article/10.1007/s41465-018-0108-x">justice</a>. Some types of supercharged brain-computer synthesis could exacerbate social inequalities if only wealthy citizens have access to enhancements.</p>
<p>A more immediate concern, however, is that a device shown to help people with disabilities could become unavailable if research funding is lost. For patients whose access to a device is tied to a research study, the <a href="https://doi.org/10.1016/j.brs.2023.04.016">prospect of losing access after the study ends</a> can be devastating. This raises thorny questions about whether it is ever ethical to <a href="https://doi.org/10.1136/medethics-2016-103868">provide early access</a> to breakthrough medical interventions prior to their receiving full FDA approval.</p>
<p><a href="https://www.researchgate.net/publication/365700467_The_Unique_and_Practical_Advantages_of_Applying_A_Capability_Approach_to_Brain_Computer_Interface">Clear ethical and legal guidelines are needed</a> to ensure the benefits that stem from scientific innovations like Neuralink’s brain chip are balanced against patient safety and societal good.</p><img src="https://counter.theconversation.com/content/222556/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Brain-computer interface devices have the potential to boost users’ autonomy, especially for people who experience paralysis. But that comes with risks, as well.Nancy S. Jecker, Professor of Bioethics and Humanities, School of Medicine, University of WashingtonAndrew Ko, Assistant Professor of Neurological Surgery, School of Medicine, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2054462023-08-07T12:44:39Z2023-08-07T12:44:39ZNew neurotechnology is blurring the lines around mental privacy − but are new human rights the answer?<figure><img src="https://images.theconversation.com/files/540894/original/file-20230802-27-zju7b0.jpg?ixlib=rb-1.1.0&rect=4%2C4%2C1017%2C677&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A woman tries out neurotechnology equipment during Tech Week in Bucharest, Romania, in May 2023.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/woman-tries-neuro-technology-equipment-at-the-tech-expo-news-photo/1258279319?adppopup=true">Cristian Cristel/Xinhua via Getty Images</a></span></figcaption></figure><p>Neurotechnologies – devices that interact directly with the brain or nervous system – were once dismissed as the stuff of science fiction. Not anymore. Several companies are developing and some are even testing “<a href="https://theconversation.com/melding-mind-and-machine-how-close-are-we-75589">brain-computer interfaces</a>,” or BCIs, of which the most high-profile is likely Elon Musk’s Neuralink. He announced on Jan. 29, 2024, that the first human in the company’s clinical trials <a href="https://twitter.com/elonmusk/status/1752098683024220632">has received a brain implant</a>.</p>
<p>Neuralink’s immediate goal, like that of other companies, is <a href="https://apnews.com/article/elon-musk-neuralink-human-brain-implant-e92ca9621c9331487c94e9b537c2d537">to improve autonomy</a> for patients with severe paralysis or other neurological disorders.</p>
<p>But not all BCIs are envisioned for medical use: There are <a href="https://doi.org/10.3389/fninf.2020.553352">EEG headsets</a> that sense electrical activity inside the wearer’s brain and <a href="https://unesdoc.unesco.org/ark:/48223/pf0000386137">cover a wide range of applications</a>, from entertainment and wellness to education and the workplace. Yet Musk’s ambitions go beyond these therapeutic and nonmedical uses. Neuralink aims to eventually help people “<a href="https://twitter.com/neuralink/status/1648478559093264387">surpass able-bodied human performance</a>.”</p>
<p>Neurotechnology research and patents have soared at least twentyfold over the past two decades, <a href="https://unesdoc.unesco.org/ark:/48223/pf0000386137">according to a United Nations report</a>, and devices are getting more powerful. Newer devices have the potential to <a href="https://theconversation.com/helping-or-hacking-engineers-and-ethicists-must-work-together-on-brain-computer-interface-technology-77759">collect data from the brain and other parts of the nervous system</a> more directly, with higher resolution, in greater amounts and in more pervasive ways.</p>
<p>However, these improvements have also raised concerns about mental privacy and human autonomy – questions I think about in my research on the <a href="https://rockethics.psu.edu/people/laura-cabrera/">ethical and social implications of brain science and neural engineering</a>. Who owns the generated data, and who should get access? Could this type of device threaten individuals’ ability to make independent decisions? </p>
<p>In July 2023, the U.N. agency for science and culture held a <a href="https://www.unesco.org/en/articles/ethics-neurotechnology-unesco-leaders-and-top-experts-call-solid-governance">conference on the ethics of neurotechnology</a>, calling for a framework to protect human rights. Some critics have even argued that societies should recognize a new category of human rights, “<a href="https://neurorightsfoundation.org/mission">neurorights</a>.” In 2021, Chile became <a href="https://doi.org/10.1007/s00146-022-01396-0">the first country</a> whose constitution addresses concerns about neurotechnology. </p>
<p>Advances in neurotechnology do raise important privacy concerns. However, I believe these debates can overlook more fundamental threats to privacy.</p>
<h2>A glimpse inside</h2>
<p>Concerns about neurotechnology and privacy focus on the idea that an observer can “read” a person’s thoughts and feelings just from recordings of their brain activity. </p>
<p>It is true that some neurotechnologies can record brain activity with great specificity: for example, developments on <a href="https://doi.org/10.1038/s41551-019-0407-2">high-density electrode arrays</a> that allow for high-resolution recording from multiple parts of the brain.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Someone standing outside the frame adjusts a glowing monitor hooked up to a computer." src="https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540896/original/file-20230802-28078-h65qq1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Paradromics, an Austin-based company, is developing a brain-computer interface to aide disabled and nonverbal patients with communication.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/aamir-ahmed-khan-phd-principal-electrical-engineer-for-news-photo/1247658566?adppopup=true">Julia Robinson for The Washington Post via Getty Images</a></span>
</figcaption>
</figure>
<p>Researchers can make inferences about mental phenomena and interpret behavior based on this kind of information. However, “reading” the recorded brain activity is not straightforward. Data has already gone through filters and algorithms before the human eye gets the output.</p>
<p>Given these complexities, my colleague <a href="https://infosci.cornell.edu/content/susser">Daniel Susser</a> and I wrote an article in the <a href="https://doi.org/10.1080/21507740.2023.2188275">American Journal of Bioethics – Neuroscience</a> asking whether some worries around mental privacy might be misplaced. </p>
<p>While neurotechnologies do raise significant privacy concerns, we argue that the risks are similar to those for more familiar data-collection technologies, such as everyday <a href="https://www.businessnewsdaily.com/10625-businesses-collecting-data.html">online surveillance</a>: the kind most people experience through internet browsers and advertising, or wearable devices. Even browser histories on personal computers are capable of revealing highly sensitive information.</p>
<p>It is also worth remembering that a key aspect of being human has always been inferring other people’s behaviors, thoughts and feelings. Brain activity alone does not tell the full story; other behavioral or physiological measures are also needed to reveal this type of information, as well as social context. A certain surge in brain activity might indicate either fear or excitement, for example.</p>
<p>However, that is not to say there’s no cause for concern. Researchers are exploring new directions in which multiple sensors – such as headbands, wrist sensors and room sensors – can be used to capture multiple kinds of behavioral and environmental data. Artificial intelligence could be used to combine that data into <a href="https://braininitiative.nih.gov/news-events/blog/nih-issues-new-funding-opportunity-establish-data-coordination-and-artificial">more powerful interpretations</a>. </p>
<h2>Think for yourself?</h2>
<p>Another thought-provoking debate around neurotechnology deals with cognitive liberty. According to the <a href="https://web.archive.org/web/20120206215115/http:/www.cognitiveliberty.org/faqs/faq_general.htm">Center for Cognitive Liberty & Ethics</a>, founded in 1999, the term refers to “the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought.”</p>
<p>More recently, other researchers have resurfaced the idea, such as in legal scholar <a href="https://law.duke.edu/fac/farahany/">Nita Farahany’s</a> book “<a href="https://us.macmillan.com/books/9781250272966/thebattleforyourbrain">The Battle for Your Brain</a>.” Proponents of cognitive liberty argue broadly for the need to protect individuals from having their mental processes manipulated or monitored without their consent. They argue that greater regulation of neurotechnology may be required to protect individuals’ freedom to determine their own inner thoughts and to control their own mental functions.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man in a gray turtleneck stands with what looks like a black and white bike helmet on his head." src="https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=422&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=422&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=422&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=531&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=531&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540895/original/file-20230802-27-br3v8a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=531&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Seung Wan Kang, founder and CEO of iMediSync Inc., displays the company’s iSyncWave, which allows people to measure their brainwaves at home, at CES 2023 in Las Vegas.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/founder-and-ceo-of-imedisync-inc-dr-seung-wan-kang-displays-news-photo/1454097687?adppopup=true">Ethan Miller/Getty Images</a></span>
</figcaption>
</figure>
<p>These are important freedoms, and specific features of novel BCI neurotechnology and of nonmedical neurotechnology applications certainly prompt important questions. Yet I would argue that the way cognitive liberty is discussed in these debates treats each individual person as an isolated, independent agent, <a href="https://doi.org/10.1057/9781137402240">neglecting the relational aspects</a> of who we are and how we think. </p>
<p>Thoughts do not simply spring out of nothing in someone’s head. For example, part of my mental process as I write this article is recollecting and reflecting on research from colleagues. I’m also reflecting on my own experiences: the many ways that who I am today is the combination of my upbringing, the society I grew up in, the schools I attended. Even the ads my web browser pushes on me can shape my thoughts.</p>
<p>How much are our thoughts uniquely ours? How much are my mental processes already being manipulated by other influences? And keeping that in mind, how should societies protect privacy and freedom?</p>
<p>I believe that acknowledging the extent to which our thoughts are already shaped and monitored by many different forces can help set priorities as neurotechnologies and AI become more common. Looking beyond novel technology to strengthen current privacy laws may give a more holistic view of the many threats to privacy, and what freedoms need defending.</p>
<p><em>This is an updated version of an article originally published on Aug. 7, 2023.</em></p>
<p class="fine-print"><em><span>Laura Y. Cabrera receives funding from National Institutes of Health, and the National Network Depression Centers. She is affiliated with IEEE, and the International Neuroethics Society. </span></em></p>More invasive devices have prompted new debates about privacy and freedom. But it’s important to keep in mind that other technologies already sense and shape our thoughts, a neuroethicist argues.Laura Y. Cabrera, Associate Professor of Neuroethics, Penn StateLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2046912023-05-01T20:00:25Z2023-05-01T20:00:25ZHow close are we to reading minds? A new study decodes language and meaning from brain scans<figure><img src="https://images.theconversation.com/files/523537/original/file-20230501-4586-xrem6f.jpeg?ixlib=rb-1.1.0&rect=56%2C42%2C9432%2C6274&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The technology to decode our thoughts is drawing ever closer. Neuroscientists at the University of Texas have for the first time decoded data from non-invasive brain scans and used them to reconstruct language and meaning from stories that people hear, see or even imagine.</p>
<p>In a <a href="https://www.nature.com/articles/s41593-023-01304-9">new study published in Nature Neuroscience</a>, Alexander Huth and colleagues successfully recovered the gist of language and sometimes exact phrases from <a href="https://www.radiologyinfo.org/en/info/fmribrain">functional magnetic resonance imaging</a> (fMRI) brain recordings of three participants. </p>
<p>Technology that can create language from brain signals could be enormously useful for people who cannot speak due to conditions such as <a href="https://www.nhsinform.scot/illnesses-and-conditions/brain-nerves-and-spinal-cord/motor-neurone-disease-mnd">motor neurone disease</a>. At the same time, it raises concerns for the future privacy of our thoughts.</p>
<h2>Language decoded</h2>
<p><a href="https://medium.com/mlearning-ai/decoding-strategies-in-language-modelling-c5752710e31a">Language decoding models</a>, also called “speech decoders”, aim to use recordings of a person’s brain activity to discover the words they hear, imagine or say. </p>
<p>Until now, speech decoders have only been used with data from devices surgically implanted in the brain, which limits their usefulness. Other decoders which used non-invasive brain activity recordings have been able to decode single words or short phrases, but not continuous language.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/weve-been-connecting-brains-to-computers-longer-than-youd-expect-these-3-companies-are-leading-the-way-197023">We've been connecting brains to computers longer than you’d expect. These 3 companies are leading the way</a>
</strong>
</em>
</p>
<hr>
<p>The new research used the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4147398/">blood oxygen level dependent signal</a> from fMRI scans, which shows changes in blood flow and oxygenation levels in different parts of the brain. By focusing on patterns of activity in brain regions and networks that process language, the researchers found their decoder could be trained to reconstruct continuous language (including some specific words and the general meaning of sentences).</p>
<p>Specifically, the decoder took the brain responses of three participants as they listened to stories, and generated sequences of words that were likely to have produced those brain responses. These word sequences did well at capturing the general gist of the stories, and in some cases included exact words and phrases. </p>
<p><iframe id="0Gv85" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/0Gv85/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The researchers also had the participants watch silent movies and imagine stories while being scanned. In both cases, the decoder often managed to predict the gist of the stories. </p>
<p>For example, one user thought “I don’t have my driver’s licence yet”, and the decoder predicted “she has not even started to learn to drive yet”.</p>
<p>Further, when participants actively listened to one story while ignoring another story played simultaneously, the decoder could identify the meaning of the story being actively listened to.</p>
<h2>How does it work?</h2>
<p>The researchers started out by having each participant lie inside an fMRI scanner and listen to 16 hours of narrated stories while their brain responses were recorded.</p>
<p>These brain responses were then used to train an <a href="https://medium.com/towards-data-science/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346">encoder</a> – a computational model that tries to predict how the brain will respond to words a user hears. After training, the encoder could quite accurately predict how each participant’s brain signals would respond to hearing a given string of words.</p>
<p>However, going in the opposite direction – from recorded brain responses to words – is trickier. </p>
<p>The encoder model is designed to link brain responses with “semantic features” or the broad meanings of words and sentences. To do this, the system uses the <a href="https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf">original GPT language model</a>, which is the precursor of today’s GPT-4 model. The decoder then generates sequences of words that might have produced the observed brain responses. </p>
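<p><em>A rough sketch of the encoder idea follows. It uses regularized linear regression to predict each voxel’s response from semantic features of recently heard words, with random arrays standing in for the language-model features and the fMRI recordings. The ridge-regression choice is a common assumption for encoding models, not a confirmed detail of this study.</em></p>
<pre><code>
# Sketch of an encoding model: learn a linear mapping from semantic features
# of heard words to the recorded fMRI (BOLD) response at each voxel.
# Random data stand in for real features and recordings; ridge regression is
# an assumption typical of encoding models, not the study's exact method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_timepoints, n_features, n_voxels = 1000, 768, 2000
semantic_features = rng.standard_normal((n_timepoints, n_features))  # from a language model
bold_responses = rng.standard_normal((n_timepoints, n_voxels))       # recorded fMRI signal

encoder = Ridge(alpha=10.0)
encoder.fit(semantic_features, bold_responses)  # learn feature -> voxel weights

# Given the features of a candidate word sequence, the encoder predicts the
# brain response that sequence should have produced.
candidate_features = rng.standard_normal((1, n_features))
predicted_response = encoder.predict(candidate_features)
print(predicted_response.shape)  # one predicted value per voxel
</code></pre>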
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A table showing stills from an animated film next to descriptions of the action decoded from fMRI scans." src="https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=548&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=548&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=548&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=689&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=689&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523578/original/file-20230501-24-jsij3z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=689&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The decoder could also describe the action when participants watched silent movies.</span>
<span class="attribution"><a class="source" href="https://www.nature.com/articles/s41593-023-01304-9">Tang et al. / Nature Neuroscience</a></span>
</figcaption>
</figure>
<p>The accuracy of each “guess” is then checked by using it to predict previously recorded brain activity, with the prediction then compared to the actual recorded activity. </p>
<p>During this resource-intensive process, multiple guesses are generated at a time, and ranked in order of accuracy. Poor guesses are discarded and good ones kept. The process continues by guessing the next word in the sequence, and so on until the most accurate sequence is determined.</p>
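<p><em>That guess-and-check procedure can be sketched as a small beam search: a language model proposes possible next words, the encoder predicts the brain activity each candidate sequence should have produced, and only the candidates whose predictions best match the actual recording survive. The function names below (propose_next_words, features_of) are placeholders, and the sketch simplifies the published method rather than reproducing it.</em></p>
<pre><code>
# Sketch of the guess-score-keep loop described above (a simple beam search).
# propose_next_words and features_of are placeholder callables; encoder is a
# fitted model whose predict() returns an expected brain response.
import numpy as np

def decode_story(recorded_activity, encoder, propose_next_words, features_of,
                 beam_width=5, n_words=50):
    beams = [("", 0.0)]  # (partial transcript, cumulative score)
    for _ in range(n_words):
        candidates = []
        for text, score in beams:
            for word in propose_next_words(text):  # language-model guesses
                guess = (text + " " + word).strip()
                predicted = encoder.predict(features_of(guess))
                # A higher correlation with the real recording means a better guess.
                fit = np.corrcoef(predicted.ravel(), recorded_activity.ravel())[0, 1]
                candidates.append((guess, score + fit))
        # Keep the best few guesses, discard the rest, then guess the next word.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # the most plausible word sequence
</code></pre>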
<h2>Words and meanings</h2>
<p>The study found that data from multiple, specific brain regions – including the speech network, the parietal-temporal-occipital association region and the prefrontal cortex – were needed for the most accurate predictions. </p>
<p>One key difference between this work and earlier efforts is the data being decoded. Most decoding systems link brain data to motor features or activity recorded from brain regions involved in the last step of speech output, the movement of the mouth and tongue. This decoder works instead at the level of ideas and meanings.</p>
<p>One limitation of using fMRI data is its low “temporal resolution”. The blood oxygen level dependent signal rises and falls over approximately a 10-second period, during which time a person might have heard 20 or more words. As a result, this technique cannot detect individual words, but only the potential meanings of sequences of words. </p>
<h2>No need for privacy panic (yet)</h2>
<p>The idea of technology that can “read minds” raises concerns over mental privacy. The researchers conducted additional experiments to address some of these concerns.</p>
<p>These experiments showed we don’t need to worry just yet about having our thoughts decoded while we walk down the street, or indeed without our extensive cooperation.</p>
<p>A decoder trained on one person’s thoughts performed poorly when predicting the semantic detail from another participant’s data. What’s more, participants could disrupt the decoding by diverting their attention to a different task such as naming animals or telling a different story.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/our-neurodata-can-reveal-our-most-private-selves-as-brain-implants-become-common-how-will-it-be-protected-197047">Our neurodata can reveal our most private selves. As brain implants become common, how will it be protected?</a>
</strong>
</em>
</p>
<hr>
<p>Movement in the scanner can also disrupt the decoder as fMRI is highly sensitive to motion, so participant cooperation is essential. Considering these requirements, and the need for high-powered computational resources, it is highly unlikely that someone’s thoughts could be decoded against their will at this stage.</p>
<p>Finally, the decoder does not currently work on data other than fMRI, which is an expensive and often impractical procedure. The group plans to test their approach on other non-invasive brain data in the future.</p>
<p class="fine-print"><em><span>Christina Maher does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>American scientists have used brain scans and machine learning to reconstruct the meaning of stories that people hear, see, or even imagine.Christina Maher, Computational Neuroscientist and Biomedical Engineer, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1970472023-02-13T19:12:45Z2023-02-13T19:12:45ZOur neurodata can reveal our most private selves. As brain implants become common, how will it be protected?<figure><img src="https://images.theconversation.com/files/509677/original/file-20230213-22-twr89b.jpeg?ixlib=rb-1.1.0&rect=33%2C88%2C7315%2C4429&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>“Hello world!”</p>
<p>In December 2021, these were the <a href="https://www.businesswire.com/news/home/20211222005557/en/Synchron-Announces-First-Direct-Thought-Tweet-%E2%80%9CHello-World%E2%80%9D-Using-an-Implantable-Brain-Computer-Interface">first words tweeted</a> by a paralysed man using only his thoughts and a brain-computer interface (BCI) implanted by the company Synchron.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1473805676086370304"}"></div></p>
<p>For millions living with paralysis, epilepsy and neuromuscular conditions, BCIs offer restored movement and, more recently, thought-to-text capabilities.</p>
<p>So far, few invasive (implanted) versions of the technology have been <a href="https://www.neuropace.com/press-release/fda-breakthrough-device-designation-for-rns-system/">commercialised</a>. But a number of companies are determined to change this. </p>
<p>Synchron is joined by Elon Musk’s Neuralink, which has documented a <a href="https://theconversation.com/neuralinks-monkey-can-play-pong-with-its-mind-imagine-what-humans-could-do-with-the-same-technology-158787">monkey playing the computer game Pong</a> using its BCI – as well as the newer <a href="https://precisionneuro.io/">Precision Neuroscience</a>, which <a href="https://www.globenewswire.com/news-release/2023/01/25/2595114/0/en/Precision-Neuroscience-Raises-41-Million-to-Build-and-Scale-the-Next-Generation-of-Treatments-for-Neurological-Illnesses.html">recently raised</a> US$41 million towards building a reversible implant thinner than a human hair. </p>
<p>Eventually, BCIs will allow people to carry out a range of tasks using their thoughts. But is this terrific, or terrifying? </p>
<h2>How do BCIs work?</h2>
<p>BCIs can be non-invasive (wearable) or invasive (implanted). Electrical activity is the most commonly captured “neurodata”, with invasive BCIs providing better signal quality than non-invasive ones.</p>
<p>The functionality of most BCIs can be summarised as passive, active and reactive. All BCIs use <a href="https://theconversation.com/signal-processing-a-field-at-the-heart-of-science-and-everyday-life-89267">signal processing</a> to filter brain signals. After processing, active and reactive BCIs can return outputs in response to a user’s voluntary brain activity.</p>
<p>Signals from specific brain regions are considered a combination of many tiny signals from multiple regions. So BCIs use <a href="https://recfaces.com/articles/pattern-regognition">pattern recognition algorithms</a> to decipher a signal’s potential origins and link it to an intentional event, such as a task or thought. </p>
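<p><em>As a toy illustration of those two steps, filtering and then pattern recognition, the sketch below band-pass filters multichannel recordings and trains a classifier to link each trial to an intended event. The 8–30 Hz band, 250 Hz sampling rate and linear discriminant classifier are common choices in noninvasive BCIs, but they are assumptions here rather than any particular device’s design.</em></p>
<pre><code>
# Toy BCI pipeline: band-pass filter the raw signal, extract a simple feature,
# then use a pattern-recognition model to link it to an intended event.
# Band, sampling rate and classifier are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # samples per second

def bandpass(raw, low=8.0, high=30.0, fs=FS):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, raw, axis=-1)

rng = np.random.default_rng(2)
# 200 one-second trials from 8 channels, each labeled with the intended event.
raw_trials = rng.standard_normal((200, 8, FS))
labels = rng.choice(["imagine_left_hand", "imagine_right_hand"], size=200)

filtered = bandpass(raw_trials)
features = filtered.var(axis=-1)  # band power per channel as a simple feature

clf = LinearDiscriminantAnalysis().fit(features, labels)
print(clf.predict(features[:3]))  # decoded intention for the first trials
</code></pre>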
<p>One of the <a href="https://www.neuropace.com/patients/rns-vs-vns-epilepsy-treatment/">first implanted BCIs</a> treated drug-resistant seizures in some of the 50 million people with epilepsy. And ongoing clinical trials <a href="https://jamanetwork.com/journals/jama/fullarticle/2800761">signal</a> a new era for neurologically and physically impaired people. </p>
<p>Outside the clinical realm, however, neurodata exist in a largely unregulated space.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/elon-musk-claims-his-neuralink-brain-chip-could-cure-tinnitus-in-5years-but-dont-hold-your-breath-182156">Elon Musk claims his Neuralink brain chip could 'cure' tinnitus in 5 years. But don't hold your breath</a>
</strong>
</em>
</p>
<hr>
<h2>An unknown middleman</h2>
<p>In human interaction, thoughts are interpreted by the person experiencing and communicating them, and separately by the person receiving the communication. In this sense, allowing algorithms to interpret our thoughts could be likened to another entity “speaking” for us.</p>
<p>This could raise issues in a future where thought-to-text is widespread. For example, a BCI may generate the output “I’m good”, when the user intended it to be “I’m great”. These are similar, but they aren’t the same. It’s easy enough for an able-bodied person to physically correct the mistake – but for people who can only communicate through BCIs, there’s a risk of being misinterpreted. </p>
<p>Moreover, implanted BCIs can provide rich access to all brain signals; there is no option to pick and choose which signals are shared. </p>
<p>Brain data are arguably our most private data because of what can be inferred regarding our identity and mental state. Yet private BCI companies <a href="https://fpf.org/blog/bcis-data-protection-in-healthcare-data-flows-risks-and-regulations/">may not need to inform users</a> about what data are used to train algorithms, or how the data are linked to interpretations that lead to outputs.</p>
<p>In Australia, strict <a href="https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018#download">data storage rules</a> require that all BCI-related patient data are stored on secure servers in a de-identified form, which helps protect patient privacy. But requirements outside of a research context are unclear.</p>
<h2>What’s at risk if neurodata aren’t protected?</h2>
<p>BCIs are unlikely to launch us into a dystopian world – in part due to current computational constraints. After all, there’s a leap between a BCI sending a short text and interpreting one’s entire stream of consciousness. </p>
<p>That said, making this leap largely comes down to how well we can train algorithms, which requires more data and computing power. The rise of <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8254820/">quantum computing</a> – whenever that may be – could provide these additional computational resources. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=425&fit=crop&dpr=1 754w, https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=425&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/509684/original/file-20230213-24-hw1dri.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=425&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Current BCIs aren’t advanced enough to quickly and reliably interpret a stream of thoughts — but a growth in computational power may allow this in the future.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Cathy O'Neil’s 2016 book, <a href="https://blogs.scientificamerican.com/roots-of-unity/review-weapons-of-math-destruction/">Weapons of Math Destruction</a>, highlights how algorithms that measure complex concepts such as human qualities could let predatory entities make important decisions for the most vulnerable people.</p>
<p>Here are some hypothetical worst-case scenarios.</p>
<ol>
<li><p>Third-party companies might buy neurodata from BCI companies and use it to make decisions, such as whether someone is granted a loan or access to health care.</p></li>
<li><p>Courts might be allowed to <a href="https://link.springer.com/article/10.1007/s00146-022-01396-0">order neuromonitoring</a> of individuals with the potential to commit crimes, based on their previous history or socio-demographic environment.</p></li>
<li><p>BCIs specialised for “neuroenhancement” could be made a condition of employment, such as in the <a href="https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017">military</a>. This would blur the boundaries between human reasoning and algorithmic influence. </p></li>
<li><p>As with all industries where data privacy is critical, there is a genuine risk of neurodata hacking, where cybercriminals access and exploit brain data.</p></li>
</ol>
<p>Then there are subtler examples, including the potential for bias. In the future, bias may be introduced into BCI technologies in a number of ways, including through:</p>
<ul>
<li><p>the selection of homogeneous training data</p></li>
<li><p>a lack of diversity among clinical trial participants (especially in control groups)</p></li>
<li><p>a lack of diversity in the teams that design the algorithms and software.</p></li>
</ul>
<p>If BCIs are to cater to diverse users, then diversity will need to be factored into every stage of development.</p>
<h2>How can we protect neurodata?</h2>
<p>The vision for “<a href="https://www.frontiersin.org/articles/10.3389/fnhum.2021.701258/full">neurorights</a>” is an evolving space. The ethical challenges lie in the balance between choosing what is best for individuals and what is best for society at large. </p>
<p>For instance, should individuals in the military be equipped with neuroenhancing devices so they can better serve their country and protect themselves on the front lines, or would that compromise their individual identity and privacy? And which legislation should capture neurorights: data protection law, health law, consumer law, or criminal law?</p>
<p>In a world first, <a href="https://neurorightsfoundation.org/chile">Chile</a> passed a neurorights law in 2021 to protect mental privacy, by explicitly classifying mental data and brain activity as a human right to be legally protected. Though a step in the right direction, it remains unclear how such a law would be enforced. </p>
<p>One US-based patient group is taking matters into its own hands. The <a href="https://www.bcipioneers.org/">BCI Pioneers</a> is an advocate group ensuring the conversation around neuroethics is patient-led.</p>
<p>Other efforts include the <a href="https://neurorightsfoundation.org/">Neurorights Foundation</a>, and the proposal of a “<a href="https://link.springer.com/chapter/10.1007/978-3-030-94032-4_14">technocratic oath</a>” modelled on the Hippocratic oath taken by medical doctors. An International Organisation for Standardisation <a href="https://www.iso.org/committee/9082407.html">committee</a> for BCI standards is also under way. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/neuralink-put-a-chip-in-gertrude-the-pigs-brain-it-might-be-useful-one-day-145383">Neuralink put a chip in Gertrude the pig's brain. It might be useful one day</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/197047/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Christina Maher does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Several companies are in the process of commercialising invasive brain-computer interfaces, including Synchron, Neuralink and Precision Neuroscience.Christina Maher, Researcher, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1940172022-12-02T13:42:14Z2022-12-02T13:42:14ZBrain-computer interfaces could allow soldiers to control weapons with their thoughts and turn off their fear – but the ethics of neurotechnology lags behind the science<figure><img src="https://images.theconversation.com/files/498321/original/file-20221130-26-kthfq4.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2100%2C1427&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Brain-computer interfaces raise many ethical questions about how and whether they should be used for certain applications.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/brain-and-chip-royalty-free-illustration/1405017412">Wenjin Chen/DigitalVision Vectors via Getty Images</a></span></figcaption></figure><p>Imagine that a soldier has a <a href="https://www.battelle.org/insights/newsroom/press-release-details/battelle-led-team-wins-darpa-award-to-develop-injectable-bi-directional-brain-computer-interface">tiny computer device injected</a> into their bloodstream that can be guided with a magnet to specific regions of their brain. With training, the soldier could then control weapon systems thousands of miles away using their thoughts alone. Embedding a similar type of computer in a soldier’s brain could <a href="https://doi.org/10.1038/s41591-020-01175-8">suppress their fear and anxiety</a>, allowing them to carry out combat missions more efficiently. Going one step further, a device equipped with an <a href="https://doi.org/10.21037%2Fatm.2019.11.109">artificial intelligence system</a> could directly control a soldier’s behavior by predicting what options they would choose in their current situation.</p>
<p>While these examples may sound like science fiction, the science to develop neurotechnologies like these is already in development. <a href="https://doi.org/10.3389/fnsys.2021.578875">Brain-computer interfaces</a>, or BCI, are technologies that decode and transmit brain signals to an external device to carry out a desired action. Basically, a user would only need to think about what they want to do, and a computer would do it for them.</p>
<p>BCIs are currently being tested in people with <a href="https://doi.org/10.1038/s41586-021-03506-2">severe neuromuscular disorders</a> to help them recover everyday functions like communication and mobility. For example, patients can turn on a light switch by visualizing the action and having a BCI decode their brain signals and transmit it to the switch. Likewise, patients can focus on specific letters, words or phrases on a computer screen that a BCI can move a cursor to select.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/YHFx6O5x5Hw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Researchers are looking into ways to directly translate brain signals into synthesized speech.</span></figcaption>
</figure>
<p>However, <a href="https://doi.org/10.1038/551159a">ethical considerations</a> have not kept pace with the science. While ethicists have <a href="https://bioethicsarchive.georgetown.edu/pcsbi/sites/default/files/GrayMatter_V2_508.pdf">pressed for more ethical inquiry</a> into neural modification in general, many practical questions around brain-computer interfaces have not been fully considered. For example, do the benefits of BCI outweigh the substantial risks of brain hacking, information theft and behavior control? Should BCI be used to curb or enhance specific emotions? What effect would BCIs have on the moral agency, personal identity and mental health of their users?</p>
<p>These questions are of great interest to us, a <a href="https://scholar.google.com/citations?user=WL2cyzEAAAAJ&hl=en">philosopher</a> and <a href="https://scholar.google.com/citations?user=78GnqoAAAAAJ&hl=en">neurosurgeon</a> who study the ethics and science of current and future BCI applications. Considering the ethics of using this technology before it is implemented could prevent its potential harm. We argue that responsible use of BCI requires safeguarding people’s ability to function in a range of ways that are considered <a href="https://doi.org/10.1007/s13347-022-00597-1">central to being human</a>.</p>
<h2>Expanding BCI beyond the clinic</h2>
<p>Researchers are exploring <a href="https://doi.org/10.1093/acprof:oso/9780195388855.003.0023">nonmedical brain-computer interface applications</a> in many fields, including gaming, virtual reality, artistic performance, warfare and air traffic control. </p>
<p>For example, <a href="https://neuralink.com/">Neuralink</a>, a company co-founded by Elon Musk, is <a href="https://doi.org/10.2196/16194">developing a brain implant</a> for healthy people to potentially <a href="https://theconversation.com/neuralink-wants-to-wire-your-brain-to-the-internet-what-could-possibly-go-wrong-76180">communicate wirelessly</a> with anyone with a similar implant and computer setup.</p>
<p>In 2018, the U.S. military’s <a href="https://www.darpa.mil/news-events/2018-03-16">Defense Advanced Research Projects Agency</a> launched a program to develop “a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once.” Its aim is to produce nonsurgical BCI for able-bodied service members for national security applications by 2050. For example, a soldier in a special forces unit could use BCI to send and receive thoughts with a fellow soldier and unit commander, a form of <a href="https://apps.dtic.mil/sti/pdfs/AD1083010.pdf">direct three-way communication</a> that would enable real-time updates and more rapid response to threats.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/p1XQ4uxqxZI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Brain-computer interfaces can allow people to perform certain tasks by merely thinking about them.</span></figcaption>
</figure>
<p>To our knowledge, these projects have not opened a public discussion about the ethics of these technologies. While the U.S. military <a href="https://apps.dtic.mil/sti/pdfs/AD1083010.pdf">acknowledges</a> that “negative public and social perceptions will need to be overcome” to successfully implement BCI, practical <a href="https://doi.org/10.1186%2Fs12910-017-0220-y">ethical guidelines are needed</a> to better evaluate proposed neurotechnologies before deploying them.</p>
<h2>Utilitarianism</h2>
<p>One approach to tackling the ethical questions BCI raises is <a href="https://plato.stanford.edu/entries/consequentialism/">utilitarian</a>. Utilitarianism is an ethical theory that strives to maximize the happiness or well-being of everyone affected by an action or policy.</p>
<p>Enhancing soldiers might create the greatest good by improving a nation’s warfighting abilities, protecting military assets by keeping soldiers remote, and maintaining military readiness. Utilitarian defenders of neuroenhancement argue that emergent technologies like BCI are <a href="https://doi.org/10.1093/oso/9780190651145.003.0002">morally equivalent</a> to other widely accepted forms of brain enhancement. For example, stimulants like caffeine can improve the brain’s processing speed and may <a href="https://doi.org/10.1038/nn.3623">improve memory</a>.</p>
<p>However, <a href="https://doi.org/10.1007/s13347-022-00597-1">some worry</a> that utilitarian approaches to BCI have moral blind spots. In contrast to medical applications designed to help patients, military applications are designed to help a nation win wars. In the process, BCI may ride roughshod over individual rights, such as the right to be mentally and emotionally healthy.</p>
<p>For example, soldiers operating drone weaponry in remote warfare today report <a href="https://www.nytimes.com/2022/04/15/us/drones-airstrikes-ptsd.html">higher levels of emotional distress</a>, post-traumatic stress disorder and broken marriages compared to soldiers on the ground. Of course, soldiers routinely elect to sacrifice for the greater good. But if neuroenhancement becomes a job requirement, it could raise unique <a href="https://doi.org/10.1093/oso/9780190651145.003.0016">concerns about coercion</a>.</p>
<h2>Neurorights</h2>
<p>Another approach to the ethics of BCI, <a href="https://doi.org/10.3389%2Ffnhum.2021.701258">neurorights</a>, prioritizes certain ethical values even if doing so does not maximize overall well-being. </p>
<p>Proponents of neurorights champion individuals’ rights to <a href="https://doi.org/10.1186/s40504-017-0050-1">cognitive liberty, mental privacy, mental integrity and psychological continuity</a>. A right to cognitive liberty might bar unreasonable interference with a person’s mental state. A right to mental privacy might require ensuring a protected mental space, while a right to mental integrity would prohibit specific harms to a person’s mental states. Lastly, a right to psychological continuity might protect a person’s ability to maintain a coherent sense of themselves over time. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person using a brain-computer interface, wearing an EEG cap connected to a laptop" src="https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/498320/original/file-20221130-26-p9zqac.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Brain-computer interfaces can take different forms, such as an EEG cap or implant in the brain.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/brain-computer-interface-lab-equipments-royalty-free-image/1199869919">oonal/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>BCIs could interfere with neurorights in a variety of ways. For example, if a BCI tampers with how the world seems to a user, they might not be able to distinguish their own thoughts or emotions from altered versions of themselves. This may violate neurorights like mental privacy or mental integrity.</p>
<p>Yet soldiers already forfeit similar rights. For example, the U.S. military is allowed to <a href="https://www.mtsu.edu/first-amendment/article/1131/rights-of-military-personnel">restrict soldiers’ free speech and free exercise of religion</a> in ways that are not typically applied to the general public. Would infringing neurorights be any different? </p>
<h2>Human capabilities</h2>
<p>A <a href="https://plato.stanford.edu/entries/capability-approach/">human capability approach</a> insists that safeguarding certain human capabilities is crucial to protecting human dignity. While neurorights home in on an individual’s capacity to think, a capability view considers a <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674072350">broader range of what people can do and be</a>, such as the ability to be emotionally and physically healthy, move freely from place to place, relate with others and nature, exercise the senses and imagination, feel and express emotions, play and recreate, and regulate the immediate environment. </p>
<p>We find a capability approach compelling because it gives a more robust picture of humanness and respect for human dignity. Drawing on this view, <a href="https://doi.org/10.1007/s13347-022-00597-1">we have argued</a> that proposed BCI applications must reasonably protect all of a user’s central capabilities at a minimal threshold. BCI designed to enhance capabilities beyond average human capacities would need to be deployed in ways that realize the user’s goals, not just other people’s.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/K8uijjp6hfc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Neural interfaces like BCI raise questions about how far development can or should be taken.</span></figcaption>
</figure>
<p>For example, a bidirectional BCI that not only extracts and processes brain signals but delivers somatosensory feedback, such as sensations of pressure or temperature, back to the user would pose unreasonable risks if it disrupts a user’s ability to trust their own senses. Likewise, any technology, including BCIs, that controls a user’s movements would infringe on their dignity if it does not allow the user some ability to override it.</p>
<p>A limitation of a capability view is that it can be difficult to define what counts as a threshold capability. The view does not describe which new capabilities are worth pursuing. Yet, neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing a capability approach with a fuller ethical analysis designed to answer these questions.</p><img src="https://counter.theconversation.com/content/194017/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>From warfare to entertainment and VR, brain-computer interface development has extended beyond prosthetics for patients with disabilities. Missing is full ethical consideration of the consequences.Nancy S. Jecker, Professor of Bioethics and Humanities, School of Medicine, University of WashingtonAndrew Ko, Assistant Professor of Neurological Surgery, School of Medicine, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1861192022-07-07T19:52:03Z2022-07-07T19:52:03ZHumans are aggressive, sometimes too much – could ‘moral enhancement’ technologies offer a solution?<figure><img src="https://images.theconversation.com/files/472928/original/file-20220707-15-4ntwzr.jpeg?ixlib=rb-1.1.0&rect=39%2C0%2C3668%2C1756&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>It’s a mistake to think problematic aggression is limited to those with psychiatric disorders. Healthy people also have the capacity for impulsive violence – and resulting “morally” poor behaviour.</p>
<p>Traditionally, moral development has been facilitated by social institutions such as religion, education and societal convention. But technology could change this. </p>
<p>If scientists could identify the predictors of reactive aggression, bio-medicine may offer ways to improve the moral behaviour of those more at risk of problematic aggression.</p>
<p>This concept of “moral enhancement” is strongly contested. Bioethicists ask: can, and <em>should</em>, biomedical interventions be used to make people “morally” better? </p>
<p>We need a lot more research before we can weigh up the practical and ethical feasibility of aggression-reducing techniques. But exploration in this space is well under way. </p>
<h2>What is ‘moral enhancement’?</h2>
<p>Broadly, moral enhancement refers to the use of bio-medicine to improve moral functioning. Some suggested methods include decreasing bias, increasing empathy, improving self-control and enhancing intelligence.</p>
<p>While this may seem like science fiction, consider the other types of human enhancement that already exist. </p>
<p>Transhumanists are acquiring new modes of perception through seismic sensors, neural implants and magnetoreception devices. <a href="https://theconversation.com/mind-bending-drugs-and-devices-can-they-make-us-smarter-91696">Smart drugs</a> are used for purported cognitive benefits such as memory and alertness – and <a href="https://theconversation.com/melding-mind-and-machine-how-close-are-we-75589">brain-computer interfaces</a> are fusing mind and machine. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/super-intelligence-and-eternal-life-transhumanisms-faithful-follow-it-blindly-into-a-future-for-the-elite-78538">Super-intelligence and eternal life: transhumanism's faithful follow it blindly into a future for the elite</a>
</strong>
</em>
</p>
<hr>
<p>It’s not a huge leap, then, to imagine we could target the biological processes that mediate our social behaviours. </p>
<p>Of course, moral enhancement is controversial, and bioethicists disagree over its feasibility and ethical implications. Could it work? And under what conditions (if any) might it be justified? </p>
<p><a href="https://link.springer.com/article/10.1007/s12152-022-09501-2">My latest</a> research explores a proposal I think is underappreciated: that moral outcomes could be improved by reducing aggression. </p>
<h2>Everyday aggression</h2>
<p>Aggressive disorders have long been treated by medical practitioners. But this is usually confined to psychiatric cases, and we know aggression is more widespread than clinical and forensic statistics reflect. </p>
<p><a href="https://onlinelibrary.wiley.com/doi/10.1002/9781118303092.ch8">Research indicates</a> only half of non-fatal violence is reported, with around 72% of unreported cases being assaults that don’t cause severe injury. But just because aggression may fall outside a clinical scope, that doesn’t mean it’s not morally problematic. </p>
<p>Everyday aggression plays out in familiar settings. Violence flares up in professional sports. Parental outbursts at youth matches aren’t uncommon; we’ve seen several examples of mums and dads <a href="https://heinonline.org/HOL/Page?handle=hein.journals/vse10&div=11&g_sent=1&casa_token=ihxS3X2nuA0AAAAA:oE0AfGtg3bZRsH_wWD5S6gHkPQHwDz3F5XYbsaHAH2Lo3SZ-nZjAP_PrGy3JTmdZFPSrhivCHw&collection=journals">physically assaulting</a> referees and umpires. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="An angry women with dark hair in a driver's seat screams and raises a fist." src="https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/472930/original/file-20220707-21-af8u55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It can come as a shock when seemingly sensible people lose it in traffic,</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>In 2014, one-punch attacks became so frequent in Australia that <a href="https://www.smh.com.au/national/nsw/daniel-christie-latest-fatality-in-epidemic-of-street-violence-20140111-30nmz.html.">media outlets</a> deemed them an “epidemic”. Then there’s <a href="https://www.sciencedirect.com/science/article/pii/S1369847821002667?casa_token=jEY0qCVJR4QAAAAA:vikfwOvO-Yc6YKQtNAyp6hgZbsRvaCCXLCZyF4S4AZF6Nkl0Lii_VllNYzmHMj6VufHvTAkVL7c">road rage</a>, which accounts for numerous cases of injury and property damage each year.</p>
<p>These examples tell us aggression pervades almost every forum of human activity. They suggest otherwise healthy people have the capacity to lose themselves to episodic violence. And perhaps some of us pose a greater hazard than others – without necessarily knowing it. </p>
<p>If we can identify risk-predictors of impulsive aggression, we may be able to prevent some of this spontaneous harm before it’s inflicted.</p>
<h2>How do we classify aggression?</h2>
<p>Psychology defines aggression as any behaviour intended to cause harm. This excludes consensual harm which a person desires for some greater good, such as surgery or tattooing.</p>
<p>Aggression comes in two broad varieties: reactive and instrumental. Reactive aggression is described as “hot-blooded” and involves extreme anger in the face of a threat. Instrumental aggression is “cold-blooded” and involves calculated acts with low emotional arousal. </p>
<p>While both types of aggression can overlap, each has a distinct neurophysiological signature. Reactive aggression activates “primal” parts of the brain, while instrumental aggression recruits more evolved areas in the neocortex.</p>
<p>Morally speaking, there’s reason to think reactive aggression is more hazardous than other forms. That doesn’t mean instrumental aggression isn’t worrisome. In fact, it’s involved in some of the most damaging conditions such as criminal psychopathy.</p>
<p>But reactive aggression is different because it lacks higher-order cognition. It engages the relatively basic limbic system – the region of the brain which deals with behavioural and emotional reactions. It also shuts down the prefrontal cortex, which is responsible for rational decision-making.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A close up of football players huddled loosely on a field, mid-game" src="https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/472929/original/file-20220707-12046-kcfozx.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Aggression is a common feature in many sports. It’s not always problematic in this context, but it can be.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>What can be done?</h2>
<p>Precise <a href="https://www.sciencedirect.com/science/article/pii/S2352154621001522?casa_token=aG4XBtgecGwAAAAA:_DTNdre_hnc7zXjzvLjVPX0QgwFjlmZdI9l3pz9zQzNeLfyMxEDOpdkodp7-oSCfn_FBllOBaNA">biomarkers</a> of reactive aggression haven’t yet been established, but scientists have identified some <a href="https://link.springer.com/article/10.1007/s11920-020-01208-6#Abs1">key contributors</a>. These include a range of genes, receptors, neurochemicals related to serotonin and dopamine, hyperactivity of the amygdala, and reduced brain matter in particular regions.</p>
<p>Certain biomedical procedures show promise. Neuromodulation techniques have been found to lower aggression by directly altering brain activity. <a href="https://www.mdpi.com/2077-0383/9/3/882">One example involves</a> a painless procedure in which electrodes are placed on a person’s head to excite or inhibit a specific part of the brain.</p>
<p>Researchers <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8247950/">have suggested</a> we could use such technology on young people with conduct disorders to prevent problematic behaviour in adulthood.</p>
<p>Another emerging technique is <a href="https://mindmedicineaustralia.org.au/what-is-psychedelic-assisted-therapy/#">psychedelic-assisted therapy</a>. Working with therapists, patients use substances such as LSD, MDMA, and psilocybin to access altered states of consciousness and positively shape values, thoughts and behaviour. Early clinical trials have shown impressive results for treating conditions including addiction, depression, and post-traumatic stress disorder. </p>
<p><a href="https://pubmed.ncbi.nlm.nih.gov/35585789/">Gene-based strategies</a> such as <a href="https://www.newscientist.com/definition/what-is-crispr/#">CRISPR</a> also offer hope for therapeutic and enhancement purposes. These work by inserting genetic material into a person’s body to modify or replace unwanted genes. Most gene therapies are still in early trial stages. They’ll need much more evaluation before they can used safely and ethically on humans. </p>
<p>Importantly, there are questions over whether moral enhancement is <a href="https://theconversation.com/common-drugs-can-affect-our-minds-and-morals-but-should-we-be-worried-about-it-44660">already happening</a>, such as when we take drugs that change our brain chemistry. If so, should we simply think of new moral enhancement strategies as a part of existing pre-emptive medical treatments?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A jar of 'happy pills' sits against a light blue background, with a silver cap unscrewed" src="https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/472931/original/file-20220707-26-7zwg0j.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">We already know of the many benefits antidepressants provide. Should such medicine be considered a form of ‘moral enhancement’?</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>The barriers</h2>
<p>There are major challenges in implementing any of the above techniques to target aggression. One is non-specificity: the neural structures involved in aggression are also implicated in states such as fear, reward, motivation and threat-detection.</p>
<p>Also, antisocial behaviours can’t simply be associated with one or two genes. They’re a result of a complex genetic architecture in which hundreds of genes, or even thousands, interact with a person’s environment and lifestyle.</p>
<p>Even if we could safely target the determinants of reactive aggression, there are lingering practical and ethical considerations. For one, not all aggression is antisocial. Aggression is often necessary for acts of protection and self-defence. </p>
<p>People can also have mixed motivations, meaning different aggression types can be present in a single act. To complicate things further, some researchers argue for additional classifications such as “micro-”, “prosocial” and “appetitive” aggression.</p>
<p>Any moral enhancement proposals must consider the impact on the person, their character and sense of self. Additionally, there are concerns around autonomy, personal freedom and the possibility of coercive treatment. </p>
<p>These factors would need to be carefully weighed against the potential benefits of moderating an individual’s aggressive tendencies. </p>
<p>Moving forward, we need to learn more about the moral significance of different types of aggression, how they present in an individual’s actions, and how they’re reflected in their biology. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/neuralinks-monkey-can-play-pong-with-its-mind-imagine-what-humans-could-do-with-the-same-technology-158787">Neuralink's monkey can play Pong with its mind. Imagine what humans could do with the same technology</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/186119/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Cohen Marcus Lionel Brown does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Some might argue ‘moral enhancement’ medicine already exists — such as when we take medicine that alters our brain chemistry. Where do we draw a line?Cohen Marcus Lionel Brown, Sessional Academic, University of WollongongLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1130882019-04-02T10:40:48Z2019-04-02T10:40:48ZBrain scan evidence in criminal sentencing: A blessing and a curse<figure><img src="https://images.theconversation.com/files/266421/original/file-20190328-139341-f9fshr.jpg?ixlib=rb-1.1.0&rect=2362%2C0%2C3026%2C2029&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Which way does neurobiological evidence tip the scales in sentencing?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/statue-justice-lady-iustitia-justitia-roman-598268906?src=kZkJ7JPoM75s1nmHUzmaqA-1-1">Alexander Kirch/Shutterstock.com</a></span></figcaption></figure><p>Brain evidence is playing an <a href="https://scholarship.law.duke.edu/faculty_scholarship/3578/">increasing role in criminal trials</a> in the United States. An analysis indicates that brain evidence such as MRI or CAT scans – meant to provide proof of abnormalities, brain damage or disorder in defendants – was used for leniency in approximately 5 percent of murder cases at the appellate level. This number jumps to an astounding 25 percent in death penalty trials. In these cases, the evidence is meant to show that the defendant lacked the capacity to control his action. In essence, “My brain made me do it.” </p>
<p>But does evidence of neurobiological disorder or abnormality tend to help or hurt the defendant? </p>
<p>Legal theorists have <a href="https://doi.org/10.1007/978-3-642-21541-4_19">previously portrayed physical evidence of brain dysfunction</a> as <a href="https://doi.org/10.1177/009885880703300214">a double-edged sword</a>. On the one hand, it might decrease a judge’s or juror’s desire to punish by minimizing the offender’s perceived responsibility for his transgressions. The thinking would be that the crime resulted from disordered brain activity, not any choice on the part of the offender. </p>
<p>On the other hand, brain evidence could increase punitive motivations toward the offender by making him seem more dangerous. That is, if the offender’s brain truly “made him” commit the crime, there is an increased risk such behavior could occur again, even multiple times, in the future. </p>
<p>To tease apart these conflicting motivations, <a href="https://scholar.google.com/citations?user=RKrUH5YAAAAJ&hl=en&oi=ao">our</a> <a href="https://scholar.google.com/citations?user=tnhHv3UAAAAJ&hl=en&oi=ao">team</a> of <a href="https://scholar.google.com/citations?user=0kgaYeoAAAAJ&hl=en&oi=sra">cognitive neuroscientists</a>, a <a href="https://www.bcm.edu/people/view/jennifer-blumenthal-barby-ph-d-m-a/b18fdd06-ffed-11e2-be68-080027880ca6">medical bioethicist</a> and a <a href="https://scholar.google.com/citations?user=lbxbspMAAAAJ&hl=en&oi=sra">philosopher</a> investigated how people tend to <a href="https://doi.org/10.1371/journal.pone.0210584">weigh neurobiological evidence</a> when deciding on criminal sentences.</p>
<h2>Less prison, more involuntary hospitalization</h2>
<p>For this experiment, our team recruited 330 volunteers to read through a criminal case summary describing a defendant found guilty of sexual assault. Before introducing any mental health evidence, we asked for an initial sentence recommendation: If our volunteers were really deciding this case, what would they have wanted to see happen to the defendant? This provided us with a baseline estimate of how much they wanted to punish the defendant.</p>
<p>Next, we filled participants in on the defendant’s mental health status using evidence of an impulse control disorder described either as neurobiological or psychological, and treatable or untreatable. (These experimental conditions were also accompanied by a control condition in which the defendant was deemed healthy.) Participants could then alter their original criminal sentencing judgments by allocating time between prison sentencing and involuntary hospitalization, however they saw fit.</p>
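<p>For readers who want to see the structure of the design laid out explicitly, the short Python sketch below reconstructs the conditions and the two-part outcome measure as described above. It is an illustrative reconstruction, not the authors' study materials; the field names and example numbers are assumptions made only for the sketch.</p>
<pre><code>
from itertools import product

# Reconstruction of the described design: evidence framed as neurobiological or
# psychological, with the disorder described as treatable or untreatable (2 x 2),
# plus a control condition in which the defendant is deemed healthy.
causes = ["neurobiological", "psychological"]
treatabilities = ["treatable", "untreatable"]
conditions = [
    {"cause": cause, "treatability": treatability}
    for cause, treatability in product(causes, treatabilities)
]
conditions.append({"cause": None, "treatability": None})  # healthy control

# Hypothetical shape of one participant's data: a baseline prison recommendation
# made before any mental health evidence, then a revised judgment split between
# prison time and involuntary hospitalization (the split need not sum to the
# baseline; participants allocated time however they saw fit).
example_response = {
    "condition": conditions[0],
    "baseline_prison_years": 10,   # example numbers only
    "revised_prison_years": 6,
    "hospitalization_years": 4,
}

for condition in conditions:
    print(condition)
print(example_response)
</code></pre>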
<p>It turned out that neurobiological evidence elicited both shorter prison sentences and longer involuntary hospitalization terms compared to equivalent psychological evidence. That is, for the same mental disorder, people assigned different levels of blame, moral responsibility and punishment based on whether they had a neurologist’s testimony versus a psychologist’s testimony to support the diagnosis.</p>
<p><iframe id="ZwPyH" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/ZwPyH/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>Our key discovery was that when mental health evidence was presented as having a neurobiological cause, laypeople assigned more importance to it. Paradoxically, this effect both favored and disfavored the defendant, depending on the punitive options available. So while prison sentences may be mitigated by the presentation of neurobiological evidence, the same evidence may increase the defendant’s risk of being involuntarily hospitalized.</p>
<p>Treatable disorders elicited both shorter prison sentences and shorter involuntary hospitalization terms compared to untreatable disorders, yet this pattern could not account for the double-edged effect of neurobiological evidence.</p>
<h2>Punishment versus protection</h2>
<p>Previous research has searched for this apparent <a href="https://doi.org/10.1126/science.1219569">double-edged effect of neurobiological evidence</a> – that it may have both aggravating and mitigating effects on criminal sentences. But prior studies have been inconclusive.</p>
<p>One possible reason for previous failures to observe the double-edged effect could be that participants were restricted to simplistic punishment measures. By relying on only a single type of punishment – in most cases, prison sentencing – earlier studies might have missed the dual competing motivations: to get justice versus to protect society. </p>
<p>We accounted for this possibility by offering participants two punishment options: commitment to prison versus mental hospital. That’s how we were able to identify that neurobiological evidence seems likely to result in a shorter prison sentence or a longer involuntary commitment to a mental hospital.</p>
<p>The effects we observed may have far-reaching implications for the law, which regularly confronts questions about the <a href="https://doi.org/10.1038/s41398-018-0274-8">quality and presentation format of mental health evidence</a>. For example, how can policymakers best manage evidentiary presentation bias? Should neurobiological evidence always be accompanied by corresponding psychological or behavioral evidence, or even warnings of potential biasing effects? If a defendant were to be excused in the case of mental illness, should jurors be made aware of treatment options? Should judges receive a legal education on neurobiological evidence?</p>
<p>Brain evidence will likely become even more common in the years ahead, and the judicial system will need to grapple with how best to use it.</p><img src="https://counter.theconversation.com/content/113088/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Corey Hill Allen received support from a grant from the John Templeton Foundation (<a href="http://www.templeton.org">www.templeton.org</a>) via the Summer Seminars on Neuroscience and Philosophy at Duke University (Subaward #: 283-0635). The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</span></em></p><p class="fine-print"><em><span>Eyal Aharoni received support from a grant from the John Templeton Foundation (<a href="http://www.templeton.org">www.templeton.org</a>) via the Summer Seminars on Neuroscience and Philosophy at Duke University (Subaward #: 283-0635). The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</span></em></p>How do jurors use different kinds of information about mental illness when making sentencing decisions? An experiment finds that neurobiological evidence could harm or help defendants.Corey Hill Allen, Ph.D. Candidate in Neuroscience, Georgia State UniversityEyal Aharoni, Assistant Professor of Psychology, Philosophy, and Neuroscience, Georgia State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/910402018-04-03T10:44:49Z2018-04-03T10:44:49ZIt’s not my fault, my brain implant made me do it<figure><img src="https://images.theconversation.com/files/212717/original/file-20180329-189810-cbug78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Probes that can transmit electricity inside the skull raise questions about personal autonomy and responsibility.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Tiefe_Hirnstimulation_-_Sonden_RoeSchaedel_seitl.jpg">Hellerhoff</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p><a href="https://www.theguardian.com/music/2014/may/27/johnny-cash-deep-brain-stimulation-urge-listen">Mr. B loves Johnny Cash</a>, except when he doesn’t. Mr. X has <a href="http://www.sciencemag.org/news/2014/04/scienceshot-deep-brain-stimulation-triggers-hallucinations">watched his doctors morph into Italian chefs</a> right before his eyes.</p>
<p>The link between the two? Both Mr. B and Mr. X received deep brain stimulation (<a href="https://doi.org/10.1038/507290a">DBS</a>), a procedure involving an implant that sends electric impulses to specific targets in the brain to alter neural activity. While brain implants aim to <a href="https://doi.org/10.1038/nature.2017.23031">treat neural dysfunction</a>, cases like these demonstrate that they may influence an individual’s perception of the world and behavior in undesired ways. </p>
<p>Mr. B received DBS as treatment for his severe obsessive compulsive disorder. He’d never been a music lover until, <a href="https://doi.org/10.3389/fnbeh.2014.00152">under DBS</a>, he developed a distinct and entirely new music preference for Johnny Cash. When the device was turned off, the preference disappeared. </p>
<p>Mr. X, an epilepsy patient, received DBS as part of an investigation to locate the origin of his seizures. During DBS, he hallucinated that doctors became chefs with aprons before the stimulation ended and the scene faded. </p>
<p>In both of these real-world cases, DBS clearly triggered the changed perception. And that introduces a host of thorny questions. As neurotechnologies like this become more common, the behaviors of people with DBS and other kinds of brain implants might challenge current societal views on responsibility.</p>
<p>Lawyers, philosophers and ethicists have labored to define the conditions under which individuals are to be judged legally and morally responsible for their actions. The brain is generally regarded as the center of control, rational thinking and emotion – it orchestrates people’s actions and behaviors. As such, the brain is key to agency, autonomy and responsibility. </p>
<p>Where does responsibility lie if a person acts under the influence of their brain implant? As <a href="http://www.bioethics.msu.edu/73-people/300-cabrera">a neuroethicist</a> and <a href="http://www.law.msu.edu/faculty_staff/profile.php?prof=723">a legal expert</a>, we suggest that society should start grappling with these questions now, before they must be decided in a court of law. </p>
<h2>Who’s to blame if something goes wrong?</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An uncontrollable urge to aim right for them?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/fabiovenni/2065036619">Fabio Venni</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Imagine that Ms. Q was driving one day and had a sudden urge to swerve into a crowded bus stop. As a result, she ended up injuring several people and damaging the bus stop. During their investigation, police found that Ms. Q had a brain implant to treat her Parkinson’s disease. This implant malfunctioned at the time the urge occurred. Furthermore, Ms. Q claims that the bus stop was not there when she acted on the impulse to swerve.</p>
<p>As brain stimulating technology advances, a hypothetical case like Ms. Q’s raises questions about moral and legal responsibility. Is Ms. Q solely responsible for her actions? Can we attribute any blame to the device? What about to the engineers who designed it or the manufacturer? The neurosurgeon who implanted it or the neurologist who programmed the device parameters?</p>
<p>Historically, moral and legal responsibility have largely focused on the autonomous individual – that is, someone with the capacity to deliberate or act on the basis of one’s own desires and plans, free of distorting external forces. However, with modern technological advances, many hands may be involved in the operation of these brain implants, <a href="https://doi.org/10.1038/nature.2017.23031">including artificial intelligence programs directly influencing the brain</a>. </p>
<p>This external influence raises questions about the degree to which someone with an implant can control their actions and behaviors. If brain implants influence someone’s decisions and behaviors, do they undermine the person’s autonomy? If autonomy is undermined, can we attribute responsibility to the individual? </p>
<p>Society needs to discuss what happens when science and technology start challenging those long-held assumptions.</p>
<h2>So many shades of gray</h2>
<p>There are different legal distinctions concerning responsibility, such as causal responsibility and liability responsibility.</p>
<p>Using this distinction, one may say that the implant is causally responsible, but that Ms. Q still has liability for her actions. One might be tempted to split the liability in this way because Ms. Q still acted on the urge – especially if she knew the risk of brain implant side effects. Perhaps Ms. Q still bears all primary responsibility but the influence of the implant should mitigate some of her punishment.</p>
<p>These are important gradations to reckon with, because the way we as a society divide liability may force patients to choose between potential criminal liability and treating a debilitating brain condition.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Would the surgeon bear some responsibility? Or the device manufacturer?</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Peds_DBS.jpg">Allurimd (talk)</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Questions also arise about product liability for companies, professional responsibility issues for researchers and technology developers, and medical malpractice for the health professionals who placed and programmed the device. Even if multiple actors share responsibility, the question of how to distribute responsibility among them still remains. </p>
<p>Adding an additional layer is the potential for malicious interference with these implants by criminals. Newer implants may have <a href="https://www.scientificamerican.com/article/wireless-brain-implant-allows-ldquo-locked-in-rdquo-woman-to-communicate/">wireless connectivity</a>. Hackers could attack such implants to use Ms. Q for their own (possibly nefarious) purposes, posing more challenges to questions of responsibility. </p>
<p>Insulin pumps and implantable cardiac defibrillators have already been hacked in real life. While there have not been any reports of malicious interference with brain implants, their increasing adoption brings greater opportunity for tech-savvy individuals <a href="https://doi.org/10.1016/j.wneu.2016.05.010">to potentially use the technology for evil</a>.</p>
<p>Considering the impact brain implants can have on moral and legal notions of responsibility, it’s time to discuss whether and when brain interventions should excuse people. New technologies often require some modification or extension of existing legal mechanisms. For example, assisted reproductive technologies have required society to <a href="https://www.uscis.gov/news/uscis-expands-definition-mother-and-parent-include-gestational-mothers-using-assisted-reproductive-technology-art">redefine what it means to be a “parent.”</a></p>
<p>It’s possible that soon we will start hearing in courtrooms: “It’s not my fault. My brain implant made me do it.”</p><img src="https://counter.theconversation.com/content/91040/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Where does responsibility lie if a person acts under the influence of their brain implant? As neurotechnologies advance, a neuroethicist and a legal expert write that now’s the time to hash it out.Laura Y. Cabrera, Assistant Professor of Neuroethics, Michigan State UniversityJennifer Carter-Johnson, Associate Professor of Law, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/827012017-08-30T08:22:47Z2017-08-30T08:22:47ZBrain stimulation can boost creativity – but could it also help you hear inspirational voices?<figure><img src="https://images.theconversation.com/files/183773/original/file-20170829-10418-l52jwz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Inspiration can come when we least expect it. </span> <span class="attribution"><span class="source">Alena Ozerova/Shutterstock</span></span></figcaption></figure><p>Steve Jobs, the late co-founder of Apple, once said “<a href="https://www.wired.com/1996/02/jobs-2/">creativity is just connecting things</a>”. There’s truth in that but there is another source of creativity, too – the ideas that simply pop into our minds. In ancient times, these were seen as <a href="https://en.wikipedia.org/wiki/Artistic_inspiration">gifts from the muses or gods</a>. Today, people sometimes describe such ideas as coming from an inner voice or even a character separate from themselves.</p>
<p>The creative ability to make connections between things is something neuroscience can improve using a brain stimulation technique called transcranial direct current stimulation (tDCS), which passes a weak electric current through the brain via electrodes on the head. But could the same technique also boost creativity by summoning inner voices?</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=924&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=924&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=924&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1161&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1161&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183786/original/file-20170829-10414-os0cq1.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1161&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A demonstration of transcranial direct current stimulation.</span>
<span class="attribution"><span class="source">Air Force photo by Bill Hancock</span></span>
</figcaption>
</figure>
<p>When I experienced tDCS in our research laboratory I merely felt a slight warmth and itch on my scalp. The technique is <a href="http://www.sciencedirect.com/science/article/pii/S2467981X16300233">considered safe</a> and <a href="http://www.sciencedirect.com/science/article/pii/S0361923007000111">adverse effects are relatively minor</a>. Of course, it is not something to be attempted at home.</p>
<p>It works by temporarily increasing the activity of the part of the brain under the positive electrode, decreasing it under the negative electrode, and altering connectivity within the brain. It has been used for a range of purposes, from boosting performance in <a href="http://www.sciencedirect.com/science/article/pii/S1053811912011743">Air Force personnel</a> to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4799388/">treating psychiatric disorders</a>.</p>
<p>Researchers have also discovered it can increase creativity. A <a href="https://academic.oup.com/cercor/article/27/4/2628/3056344/Thinking-Cap-Plus-Thinking-Zap-tDCS-of-Frontopolar">recent study</a> found it allowed people to make more “outside of the box” connections. This study placed the positive electrode over the left frontopolar cortex, which is involved in processes including <a href="http://www.pnas.org/content/112/9/E1020.full">multitasking, reasoning and memory</a>. Participants who experienced tDCS were able to make more creative analogies.</p>
<p>But what about the experience of ideas that just pop up? Waiting for ideas to come is unnerving, as we come to realise we have little or no control over this process. As the stand-up comedian <a href="https://www.theguardian.com/culture/2015/jan/04/stewart-lee-i-dont-know-where-the-ideas-come-from">Stewart Lee puts it</a>:</p>
<blockquote>
<p>I don’t know where the ideas come from, and it’s terrifying. They seem to be absolute flukes … I’m just hoping that some sort of event will descend on me.</p>
</blockquote>
<p>Many writers have information descend on them from their characters, who can be experienced as <a href="http://journals.sagepub.com/doi/abs/10.2190/FTG3-Q9T0-7U26-5Q5X?journalCode=icaa">autonomous entities</a> that communicate with them. Writer <a href="http://www.bfi.org.uk/films-tv-people/4ce2b8bd16267">Hilary Mantel describes</a> the creation of her story, The Giant, O'Brien, as “it being listened to by me”. She asks:</p>
<blockquote>
<p>How can I seem to produce a character who acts in a way that is independent from me and foreign to me? Where did these thoughts come from?</p>
</blockquote>
<p>Author <a href="https://dianerehm.org/shows/2016-12-29/j-k-rowling-rebroadcast">JK Rowling reports</a> some of her characters come through a “mysterious process no one really understands”, just popping up.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/183779/original/file-20170829-12462-1j4dejw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">JK Rowling says some of her characters ‘just pop up’.</span>
<span class="attribution"><span class="source">s_bukley/Shutterstock</span></span>
</figcaption>
</figure>
<p>Could we use neurostimulation to facilitate this mysterious process? Could we even summon an artificial muse? To answer this, we need to consider people who already have them.</p>
<h2>The science of inner voices</h2>
<p>A common way to experience others in our heads is through “hearing voices”. <a href="http://journals.sagepub.com/doi/abs/10.2190/74V5-HNXN-JEY5-DG7W?journalCode=icaa">Most of us</a> have had such fleeting experiences, like hearing our name called when no one is there. <a href="http://www.jkp.com/uk/can-t-you-hear-them-34840.html">Around 2-3% of the population</a> have more extended voice-hearing experiences. </p>
<p>If the voice is nasty <a href="https://www.ncbi.nlm.nih.gov/pubmed/21450152">this can cause problems</a> and lead the person to seek help. If, however, the voice is friendly or benign, and the person <a href="https://www.ncbi.nlm.nih.gov/pubmed/28053132">has some control over it</a>, they may <a href="https://www.ncbi.nlm.nih.gov/pubmed/27866082">never need or seek help</a>. </p>
<p>Some voices are <a href="https://www.ncbi.nlm.nih.gov/pubmed/23267192">simply gibberish</a>. Others say the same kind of thing over and over, like a stuck record. Some are like memories, recapitulating the past. But others have more creative potential.</p>
<p>The French mathematician Françoise Chatelin <a href="http://www.bfi.org.uk/films-tv-people/4ce2b8bd16267">described how</a> hearing voices helped her “open a new door on the way to perceive numbers”. The English psychologist <a href="https://www.ted.com/talks/eleanor_longden_the_voices_in_my_head">Eleanor Longden revealed how</a>, when she was a student, voices told her answers during exams. Another English voice-hearer, Peter Bullimore, wrote a book using ideas and characters his voices gave him and says he “<a href="https://mentalhealthrecovery.omeka.net/exhibits/show/peter-bullimore/hearing-voices/a-village-called-pumpkin">couldn’t have done it without them</a>”. </p>
<p>Research shows tDCS can <a href="https://www.ncbi.nlm.nih.gov/pubmed/25798123">reduce voice-hearing</a> in people diagnosed with schizophrenia, which is <a href="http://journals.sagepub.com/doi/full/10.1177/1049732315581602">what some want</a>. Such studies typically increase the activity of the left prefrontal cortex, involved in planning and controlling our thoughts and actions, reduce activity in the left temporoparietal junction, involved in <a href="https://www.ncbi.nlm.nih.gov/pubmed/16460715">communicating with others</a>, and <a href="https://www.ncbi.nlm.nih.gov/pubmed/26303936">alter the connectivity between the frontal and temporal lobes of the brain</a>.</p>
<p>So, what would happen if we performed this <em>in reverse</em> – in people who don’t hear voices? A recent study, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4179889/">published in the journal Neuropsychologia</a>, did something similar with healthy volunteers and found it made them more likely to hallucinate words in white noise. <a href="http://www.nature.com/nature/journal/v443/n7109/full/443287a.html?foxtrotcallback=true">Other studies</a> have found that neurostimulation of the left temporoparietal junction causes the feeling that an unseen person is near.</p>
<p>We are clearly a long way from an electric muse. Yet such research places it on the horizon. We would also need a cultural shift for this idea to be adopted – a move from viewing voice-hearing as <a href="http://journals.sagepub.com/doi/abs/10.1177/0020764014535757">necessarily a sign of pathology</a> to one which accepts it can sometimes be <a href="http://www.pccs-books.co.uk/products/living-with-voices-50-stories-of-recovery#.WZgPnT6GOpo">helpful, creative and desirable</a>.</p>
<h2>Beyond neuroscience</h2>
<p>Of course, other approaches can also push our brain to speak to us. “Sensory deprivation” – blocking a specific sense through, for example, blindfolds or earmuffs – has <a href="https://link.springer.com/article/10.1007/s11097-011-9233-z">some limited ability</a> to summon voices. Absorptive practices <a href="http://www.penguinrandomhouse.com/books/104442/when-god-talks-back-by-t-m-luhrmann/9780307277275/">such as prayer or meditation</a> can also cause voice-hearing. Indeed, practitioners of <a href="https://www.vice.com/en_us/article/exmqzz/tulpamancy-internet-subculture-892">Tulpamancy</a> claim to conjure up seemingly <a href="http://somatosphere.net/2015/04/varieties-of-tulpa-experiences-sentient-imaginary-friends-embodied-joint-attention-and-hypnotic-sociality-in-a-wired-world.html">sentient entities</a> through meditation.</p>
<p>A simpler, although obviously illegal, route is psychedelic drugs such as <a href="http://www.simonandschuster.com/books/DMT-The-Spirit-Molecule/Rick-Strassman/9780892819270">DMT</a> and <a href="http://psycnet.apa.org/record/2005-09713-005">psilocybin</a>. As Terence McKenna once said of psilocybin, “<a href="https://www.youtube.com/watch?v=KfgRWZx7Q00">there is a mind there waiting</a>”. Unfortunately, there are few formal studies of <a href="http://www.ingentaconnect.com/content/ben/cdar/2014/00000007/00000002/art00006">what these encounters are like</a> and <a href="https://academic.oup.com/schizophreniabulletin/article/39/6/1343/1883760/Functional-Connectivity-Measures-After-Psilocybin">how the brain creates them</a>. Such research could tell us much about what our brains are capable of, and how.</p>
<p>Even if it were feasible to use neurostimulation to conjure ideas via voices, would it be ethical? Lacking control of voices and not liking what they said could <a href="https://www.ncbi.nlm.nih.gov/pubmed/21450152">lead to distress and problems functioning</a>. Voices could also be dangerously deified rather than critically considered, as the unseen are often <a href="https://www.ncbi.nlm.nih.gov/pubmed/22115329">mistaken for the unerring</a>.</p>
<p>Also, what would we have created – philosophically speaking? Could it exhibit intelligent human behaviour? Would it display self-conscious emotions, or even be conscious? This would be what many writers strive for. Indeed, Hilary Mantel describes the writing process as “<a href="http://www.bfi.org.uk/films-tv-people/4ce2b8bd16267">allowing a new consciousness to emerge</a>”. </p>
<p>New understandings of the brain will, eventually, help us tap our inner wells for inspiration. This process may even shed light on how consciousness arises. As inventor Thomas Edison noted, though, it will only be through a lot of perspiration that such inspiration gets us anywhere.</p><img src="https://counter.theconversation.com/content/82701/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dr McCarthy-Jones receives research funding from the Irish Research Council and the US-based Brain & Behavior Research Foundation.</span></em></p>Many writers say they have inspiration come to them from their characters or an inner voice. Science is seeking answers.Simon McCarthy-Jones, Associate Professor in Clinical Psychology and Neuropsychology, Trinity College DublinLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/777592017-06-14T02:23:03Z2017-06-14T02:23:03ZHelping or hacking? Engineers and ethicists must work together on brain-computer interface technology<figure><img src="https://images.theconversation.com/files/173203/original/file-20170609-4841-73vkw2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A subject plays a computer game as part of a neural security experiment at the University of Washington.</span> <span class="attribution"><span class="source">Patrick Bennett</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>In the 1995 film <a href="http://www.imdb.com/title/tt0112462/">“Batman Forever</a>,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company <a href="http://www.nielsen.com/us/en/press-room/2011/nielsen-acquires-neurofocus.html">Nielsen had acquired Neurofocus</a> and had created a “consumer neuroscience” division that uses <a href="http://www.nielsen.com/us/en/solutions/capabilities/consumer-neuroscience.html">integrated conscious and unconscious data</a> to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.</p>
<p>Recent announcements <a href="https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs">by Elon Musk</a> <a href="https://techcrunch.com/2017/04/19/facebook-brain-interface/">and Facebook</a> about <a href="https://theconversation.com/melding-mind-and-machine-how-close-are-we-75589">brain-computer interface (BCI) technology</a> are just the latest headlines in an ongoing science-fiction-becomes-reality story.</p>
<p>BCIs use brain signals to control objects in the outside world. They’re a potentially world-changing innovation – imagine being paralyzed but able to “reach” for something with a prosthetic arm <a href="http://www.slate.com/blogs/future_tense/2012/12/21/jan_scheuermann_footage_of_paralyzed_woman_eating_chocolate_with_robotic.html">just by thinking about it</a>. But the revolutionary technology also raises concerns. Here at the University of Washington’s Center for Sensorimotor Neural Engineering (<a href="http://www.csne-erc.org/">CSNE</a>) we and our colleagues are researching BCI technology – and a crucial part of that includes working on issues such as neuroethics and neural security. Ethicists and engineers are working together to understand and quantify risks and develop ways to protect the public now. </p>
<h2>Picking up on P300 signals</h2>
<p>All BCI technology relies on being able to collect information from a brain that a device can then use or act on in some way. There are numerous places from which signals can be recorded, as well as infinite ways the data can be analyzed, so there are many possibilities for how a BCI can be used.</p>
<p>Some BCI researchers zero in on one particular kind of regularly occurring brain signal that alerts us to important changes in our environment. Neuroscientists call these signals “<a href="https://doi.org/10.4103/0972-6748.57865">event-related potentials</a>.” In the lab, they help us identify a reaction to a stimulus.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=524&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=524&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172819/original/file-20170607-29557-1ggtcor.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=524&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Examples of event-related potentials (ERPs), electrical signals produced by the brain in response to a stimulus.</span>
<span class="attribution"><span class="source">Tamara Bonaci</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>In particular, we capitalize on one of these specific signals, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2715154/">called the P300</a>. It’s a positive peak of electrical activity, recorded toward the back of the head, that occurs about 300 milliseconds after the stimulus is shown. The P300 alerts the rest of your brain to an “oddball” that stands out from the rest of what’s around you.</p>
<p>For example, you don’t stop and stare at each person’s face when you’re searching for your friend at the park. Instead, if we were recording your brain signals as you scanned the crowd, there would be a detectable P300 response when you saw someone who could be your friend. The P300 carries an unconscious message alerting you to something important that deserves attention. These signals are part of a still unknown brain pathway that aids in detection and focusing attention.</p>
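<p>At its core, the computation behind this is simple averaging. The sketch below is our own illustration rather than any lab’s actual pipeline – the sampling rate, epoch lengths and the 250–500 millisecond search window are assumed values – but it shows how repeated stimulus-locked snippets of a single EEG channel can be averaged so that a P300-like positive deflection stands out from background noise.</p>
<pre><code># Illustrative sketch only: cut stimulus-locked epochs from one EEG channel,
# average them, and report the largest positive deflection 250-500 ms after
# the stimulus. The sampling rate and window lengths are assumptions.
import numpy as np

FS = 250              # assumed sampling rate in Hz
PRE, POST = 0.2, 0.8  # seconds of data kept before/after each stimulus onset

def epoch(eeg, stim_samples, fs=FS, pre=PRE, post=POST):
    """Cut fixed-length windows around each stimulus onset (single channel)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    trials = [eeg[s - n_pre : s + n_post] for s in stim_samples
              if s >= n_pre and len(eeg) - s >= n_post]
    return np.array(trials)                    # shape: (n_trials, n_samples)

def p300_peak(trials, fs=FS, pre=PRE):
    """Average the trials, subtract the pre-stimulus baseline, and return the
    amplitude and latency (in ms) of the largest positive peak at 250-500 ms."""
    erp = trials.mean(axis=0)
    erp -= erp[: int(pre * fs)].mean()         # baseline correction
    start, stop = int((pre + 0.25) * fs), int((pre + 0.50) * fs)
    i = start + int(np.argmax(erp[start:stop]))
    return erp[i], (i / fs - pre) * 1000.0
</code></pre>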
<h2>Reading your mind using P300s</h2>
<p>P300s reliably occur any time you notice something rare or disjointed, like when you find the shirt you were looking for in your closet or your car in a parking lot. Researchers can use the P300 in an experimental setting to determine what is important or relevant to you. That’s led to the creation of devices like spellers that allow paralyzed individuals to type using their thoughts, <a href="https://doi.org/10.1016/0013-4694(88)90149-6">one character at a time</a>.</p>
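<p>To give a feel for how such a speller works, here is a toy sketch of the generic row/column paradigm – not a description of any particular device. Rows and columns of a letter grid flash in random order, a P300 score is accumulated for each flash, and the letter at the intersection of the best-scoring row and column is selected. The 6x6 grid, the scoring callback and the repetition count are all assumptions.</p>
<pre><code># Toy sketch of a row/column P300 speller. score_flash is assumed to return a
# P300-likelihood score for a single flash (e.g. from a calibrated classifier).
import numpy as np

GRID = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                 list("STUVWX"), list("YZ1234"), list("56789_")])

def spell_one_letter(score_flash, n_repetitions=10, rng=None):
    """Flash every row and column n_repetitions times, accumulate scores, and
    return the letter where the best-scoring row and column intersect."""
    if rng is None:
        rng = np.random.default_rng()
    row_scores, col_scores = np.zeros(6), np.zeros(6)
    for _ in range(n_repetitions):
        for i in rng.permutation(6):
            row_scores[i] += score_flash("row", int(i))
        for j in rng.permutation(6):
            col_scores[j] += score_flash("col", int(j))
    return GRID[int(np.argmax(row_scores)), int(np.argmax(col_scores))]
</code></pre>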
<p>It also can be used to determine what you know, in what’s called a “<a href="https://dx.doi.org/10.3109/00207458808985770">guilty knowledge test</a>.” In the lab, subjects are asked to choose an item to “steal” or hide, and are then repeatedly shown images of both related and unrelated items. For instance, subjects choose between a watch and a necklace, and are then shown typical items from a jewelry box; a P300 appears when the subject is presented with the image of the item they took.</p>
<p>Everyone’s P300 is unique. In order to know what they’re looking for, researchers need “training” data. These are previously obtained brain signal recordings that researchers are confident contain P300s; they’re then used to calibrate the system. Since the test measures an unconscious neural signal that you don’t even know you have, can you fool it? Maybe, if you <a href="https://doi.org/10.1111/j.1469-8986.2004.00158.x">know that you’re being probed and what the stimuli are</a>.</p>
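<p>In software, that calibration step can look something like the sketch below. It assumes the recorded epochs have already been labeled as target (the chosen item) or non-target; the linear discriminant classifier is our illustrative choice, not necessarily what any particular group uses.</p>
<pre><code># Illustrative calibration sketch: fit a linear classifier on a subject's
# labeled training epochs, then use it to score new ones. X is assumed to be
# an array of epochs with shape (n_trials, n_samples); y is 1 for trials that
# contained the "oddball" (target) stimulus and 0 otherwise.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def calibrate(X, y):
    """Return a trained target/non-target decoder and its estimated accuracy."""
    X_flat = X.reshape(len(X), -1)
    accuracy = cross_val_score(LinearDiscriminantAnalysis(), X_flat, y, cv=5).mean()
    clf = LinearDiscriminantAnalysis().fit(X_flat, y)
    return clf, accuracy

def looks_like_target(clf, new_epoch):
    """True if the decoder thinks this epoch contains a P300 response."""
    return bool(clf.predict(new_epoch.reshape(1, -1))[0])
</code></pre>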
<p>Techniques like these are still considered unreliable and unproven, and thus U.S. courts have <a href="https://doi.org/10.1176/ps.2007.58.4.460">resisted admitting P300 data as evidence</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172821/original/file-20170607-25764-pbljrg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">For now, most BCI technology relies on somewhat cumbersome EEG hardware that is definitely not stealth.</span>
<span class="attribution"><span class="source">Mark Stone, University of Washington</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Imagine that instead of using a P300 signal to solve the mystery of a “stolen” item in the lab, someone used this technology to extract information about what month you were born or which bank you use – without your telling them. Our research group has <a href="https://digital.lib.washington.edu/researchworks/handle/1773/33808">collected data suggesting this is possible</a>. Just using an individual’s brain activity – specifically, their P300 response – we could determine a subject’s preferences for things like favorite coffee brand or favorite sports.</p>
<p>But we could do it only when subject-specific training data were available. What if we could figure out someone’s preferences without previous knowledge of their brain signal patterns? Without the need for training, users could simply put on a device and go, skipping the step of loading a personal training profile or spending time in calibration. Research on trained and untrained devices is the subject of <a href="http://brl.ee.washington.edu/neural-engineering/bci-security/">continuing experiments at the University of Washington</a> <a href="https://perso.uclouvain.be/fstandae/PUBLIS/190.pdf">and elsewhere</a>. </p>
<p>It’s when the technology is able to “read” someone’s mind who isn’t actively cooperating that ethical issues become particularly pressing. After all, we willingly trade bits of our privacy all the time – when we open our mouths to have conversations or use GPS devices that allow companies to collect data about us. But in these cases we consent to sharing what’s in our minds. The difference with next-generation P300 technology under development is that the protection consent gives us may get bypassed altogether.</p>
<p>What if it’s possible to decode what you’re thinking or planning without you even knowing? Will you feel violated? Will you feel a loss of control? Privacy implications may be wide-ranging. Maybe advertisers could know your preferred brands and send you personalized ads – which may be convenient or creepy. Or maybe malicious entities could determine where you bank and your account’s PIN – which would be alarming. </p>
<h2>With great power comes great responsibility</h2>
<p>The potential ability to determine individuals’ preferences and personal information using their own brain signals has spawned a number of difficult but pressing questions: Should we be able to keep our neural signals private? That is, should neural security <a href="https://doi.org/10.1186/s40504-017-0050-1">be a human right</a>? How do we <a href="https://dx.doi.org/10.2139/ssrn.2427564">adequately protect and store all the neural data</a> being recorded for research, and soon for leisure? How do consumers know if any protective or anonymization measures are being taken with their neural data? As of now, neural data collected for commercial uses are not subject to the same legal protections covering <a href="https://www.hhs.gov/hipaa/index.html">biomedical research or health care</a>. Should neural data be treated differently?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/172822/original/file-20170607-25764-qhx5o4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Neuroethicists from the UW Philosophy department discuss issues related to neural implants.</span>
<span class="attribution"><span class="source">Mark Stone, University of Washington</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>These are the kinds of conundrums that are best addressed by neural engineers and ethicists working together. Putting ethicists in labs alongside engineers – <a href="http://www.csne-erc.org/research/neuroethics">as we have done at the CSNE</a> – is one way to ensure that privacy and security risks of neurotechnology, as well as other ethically important issues, are an active part of the research process instead of an afterthought. For instance, Tim Brown, an ethicist at the CSNE, is “housed” within a neural engineering research lab, allowing him to have daily conversations with researchers about ethical concerns. He’s also easily able to interact with – and, in fact, interview – research subjects about their <a href="http://www.csne-erc.org/engage-enable/post/ethics-cornerstone-neural-engineering-research">ethical concerns about brain research</a>. </p>
<p>There are important ethical and legal lessons to be drawn about technology and privacy from other areas, such as <a href="https://www.genome.gov/27561246/privacy-in-genomics">genetics</a> and <a href="http://www.theneuroethicsblog.com/2011/08/ethical-dimenstions-of-neuromarketing.html">neuromarketing</a>. But there seems to be something important and different about reading neural data. They’re more intimately connected to the mind and who we take ourselves to be. As such, ethical issues raised by BCI demand special attention.</p>
<h2>Working on ethics while tech’s in its infancy</h2>
<p>As we wrestle with how to address these privacy and security issues, there are two features of current P300 technology that will buy us time.</p>
<p>First, most commercial devices available use dry electrodes, which rely solely on skin contact to conduct electrical signals. This technology is prone to a low signal-to-noise ratio, meaning that we can extract only relatively basic forms of information from users. The brain signals we record are known to be highly variable (even for the same person) due to things like electrode movement and the constantly changing nature of brain signals themselves. Second, electrodes are not always in ideal locations to record.</p>
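<p>A back-of-the-envelope illustration of that signal-to-noise limitation, with entirely made-up numbers: because the noise in each trial is roughly independent, averaging N trials improves the signal-to-noise ratio only by about the square root of N, so weak, noisy single-trial recordings force a system to collect many repetitions before it can conclude much of anything.</p>
<pre><code># Made-up numbers, for illustration only: a 5 microvolt "P300" buried in
# 20 microvolts of noise only becomes visible after averaging many trials,
# and the rough SNR grows with roughly the square root of the trial count.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(fs) / fs                    # one-second epoch
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # 5 uV bump at 300 ms
noise_sd = 20e-6                          # 20 uV of background noise per trial

for n_trials in (1, 10, 100):
    trials = p300 + rng.normal(0.0, noise_sd, size=(n_trials, len(t)))
    residual = trials.mean(axis=0) - p300           # what remains is noise
    print(f"{n_trials:4d} trials -> rough SNR {p300.max() / residual.std():.2f}")
</code></pre>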
<p>Altogether, this inherent lack of reliability means that BCI devices are not nearly as ubiquitous today as they may be in the future. As electrode hardware and signal processing continue to improve, it will become easier to use devices like these continuously – and also easier to extract personal information from an unknowing individual.</p>
<p>The goal should be that the ethical standards and the technology will mature together to ensure future BCI users are confident their privacy is being protected as they use these kinds of devices. It’s a rare opportunity for scientists, engineers, ethicists and eventually regulators to work together to create even better products than were originally dreamed of in science fiction.</p><img src="https://counter.theconversation.com/content/77759/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Eran Klein a member of the Center for Sensorimotor Neural Engineering (CSNE) at the University of Washington which receives funding from the National Science Foundation (NSF).</span></em></p><p class="fine-print"><em><span>Katherine Pratt works for the Electrical Engineering department at the University of Washington in Seattle, and is affiliated with the Center for Sensorimotor Neural Engineering (CSNE). Katherine Pratt receives funding from the National Science Foundation and Technology Policy Lab, and has also previously received support from Google. The CSNE partners with the companies listed at <a href="http://csne-erc.org/content/current-members">http://csne-erc.org/content/current-members</a></span></em></p>BCI devices that read minds and act on intentions can change lives for the better. But they could also be put to nefarious use in the not-too-distant future. Now’s the time to think about risks.Eran Klein, Adjunct Assistant Professor of Neurology at Oregon Health and Sciences University and Affiliate Assistant Professor of Philosophy, University of WashingtonKatherine Pratt, Ph.D. Student in Electrical Engineering, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/690972016-12-01T01:55:26Z2016-12-01T01:55:26ZNeuroscience hasn’t been weaponized – it’s been a tool of war from the start<figure><img src="https://images.theconversation.com/files/148172/original/image-20161130-17791-94aqza.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A discipline neither good nor evil.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Turning_the_Mind_Inside_Out_Saturday_Evening_Post_24_May_1941_a_detail_1.jpg">Saturday Evening Post/Harris A. Ewing</a></span></figcaption></figure><p>What could once only be imagined in science fiction is now increasingly coming to fruition: <a href="http://www.independent.co.uk/news/science/drones-brain-thoughts-controlled-bci-brain-computer-interface-brain-controlled-interface-a6996781.html">Drones can be flown by human brains’ thoughts</a>. Pharmaceuticals can <a href="http://www.theatlantic.com/health/archive/2014/08/changing-memories-to-treat-ptsd/379223/">help soldiers forget traumatic experiences</a> or produce feelings of trust to encourage <a href="http://www.usnews.com/news/articles/2012/05/15/oxytocin-the-trust-hormone-could-become-new-interrogation-tool">confession in interrogation</a>. DARPA-funded research is working on everything from <a href="http://www.darpa.mil/news-events/2015-01-19">implanting brain chips</a> to “<a href="https://swarmlab.eecs.berkeley.edu/projects/4887/neural-dust-ultrasonic-low-power-solution-chronic-brain-machine-interfaces">neural dust</a>” in an effort to alleviate the effects of traumatic experience in war. Invisible microwave beams produced by military contractors and <a href="https://www.aclu.org/blog/speakeasy/dont-let-militarys-deadly-pain-ray-machine-invade-la-county-jail">tested on U.S. prisoners</a> can produce the sensation of burning at a distance.</p>
<p>What all these techniques and technologies have in common is that they’re recent neuroscientific breakthroughs propelled by military research within a broader context of rapid neuroscientific development, driven by massive government-funded projects in both <a href="https://www.braininitiative.nih.gov/">America</a> and the <a href="https://www.humanbrainproject.eu/">European Union</a>. Even while much about the brain <a href="http://www.nytimes.com/2014/11/11/science/learning-how-little-we-know-about-the-brain.html">remains mysterious</a>, this research has contributed to the rapid and startling development of neuroscientific technology.</p>
<p>And while we might marvel at these developments, it is also undeniably true that this state of affairs raises significant ethical questions. What is the proper role – if any – of neuroscience in national defense or war efforts? <a href="https://newark-rutgers.academia.edu/AlisonHowell">My research</a> addresses these questions in the broader context of looking at how international relations, and specifically warfare, are shaped by scientific and medical expertise and technology.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_FqcbFHFisQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">An Air Force video about military research on the human brain.</span></figcaption>
</figure>
<h2>Weaponization of a peaceable science?</h2>
<p>To understand the relationship between science and war, academic <a href="http://blpress.org/books/mind-wars/">bioethicists</a>, <a href="https://www.wired.com/2012/02/neuroscience-war/">journalists</a> and <a href="https://royalsociety.org/%7E/media/Royal_Society_Content/policy/projects/brain-waves/2012-02-06-BW3.pdf">policy advisors</a> alike typically rely on the framework of “dual use.” Starting from the assumption that the purpose of science is to improve human life, this perspective nevertheless admits that many technologies used in peacetime or to help enhance human capacities can also be harnessed to a second use: harming and degrading human capacities as part of a military arsenal. This framework calls attention to the potential misappropriation of sciences and technologies. By acknowledging potential misuses, it aims to help guide policy to limit such possibilities through practical tools such as weapons conventions.</p>
<p>Key to this framework is the concept of “weaponization.” The dual use idea assumes that we should be concerned with how a once “peaceful” science or technology came to be developed and used in war or national security applications. This process is termed the “<a href="http://doi.org/10.1007/978-94-007-4707-4_144">weaponization of neuroscience</a>.” </p>
<p>The dual use framework and the weaponization concept may offer some immediate potential practical utility. But, <a href="http://doi.org/10.1177/0305829816672930">as I have written more extensively elsewhere</a>, they’re based on a massively misguided notion both of the history of neuroscience and of what is at stake practically and politically.</p>
<h2>Neuroscience’s roots are both civilian and military</h2>
<p>The dual use framework and weaponization concept assume stark war/peace and military/civilian divides. But in fact, the discipline of neuroscience grew equally and simultaneously out of institutions we typically consider civilian and military.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=431&fit=crop&dpr=1 600w, https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=431&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=431&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=542&fit=crop&dpr=1 754w, https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=542&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/148175/original/image-20161130-17786-1hbjf91.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=542&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Walter Reed Army Institute of Research building, site of much early neuroscience work.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/medicalmuseum/4424656595">National Museum of Health and Medicine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p><a href="http://doi.org/10.1146/annurev.neuro.23.1.343">Modern neuroscience</a> was established in the post-WWII period. Like many disciplines developed and funded in that era (such as <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674736825">physics</a>, <a href="http://www.randomhousebooks.com/books/188642/">nuclear medicine</a> and others), the discipline was established through military funding in both “civilian” institutions such as MIT and Harvard and military research institutes such as the <a href="http://wrair-www.army.mil/">Walter Reed Army Institute of Research</a>. That Institute’s Department of Neuropsychiatry <a href="https://www.washingtonpost.com/archive/local/1985/09/13/psychiatrist-david-rioch-dies-at-85/98593033-39cf-47b5-af14-2d25013b73c9/">originated the idea</a> that researchers should study brain anatomy and physiology at the same time as psychology or psychiatry. Neuroscience was <a href="http://dx.doi.org/10.1371/journal.pbio.1001289">funded</a> and <a href="https://www.youtube.com/watch?v=rM6NERF5wP8">shaped</a> to meet the needs of warfare and national security imperatives.</p>
<p>This state of affairs was nothing new: Modern warfare and medical and scientific innovation have <a href="https://doi.org/10.1017/S0260210514000369">long been symbiotic</a>, including the “invention” of <a href="http://dx.doi.org/10.1093/brain/awp339">American clinical neurology</a> through the American Civil War. It’s not possible to say that neuroscience has been “weaponized,” because this presumes a naturally peaceful and nonmilitary origin story that is simply historically inaccurate.</p>
<h2>Simultaneously used for good and ill</h2>
<p>Also, the dual use framework and the concept of weaponization assume a distinct divide between help and harm. People using these concepts are primarily concerned with harmful applications of neuroscience – those that degrade human capacities. Without a doubt, these are of deep concern. Few would deny that we should pay close attention, for instance, to the <a href="http://www.bbc.com/news/world-europe-20067384">use of neuropharmaceuticals</a> to degrade the combat capabilities of enemies or produce interrogation susceptibility, or related developments.</p>
<p>But the stark divide between help and harm elides the fact that many technologies can do both simultaneously.</p>
<p>One example is the current DARPA-funded development of brain-machine interfaces. These technologies seek to connect the brain directly to machine technologies in order to control them remotely. Of course this may be a boon for veterans and soldiers in need of <a href="http://www.darpa.mil/news-events/2013-05-30">better prosthetic devices</a>. But these are the very same technologies (and sometimes the <a href="https://www.washingtonpost.com/news/speaking-of-science/wp/2015/03/03/a-paralyzed-woman-flew-a-f-35-fighter-jet-in-a-simulator-using-only-her-mind/">same experimental subjects</a>) that are being used to pilot drones for potential use in warfare.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/148142/original/image-20161130-17047-1sm1a8m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Virtual Iraq’ exposure therapy can help veterans – and prepare them to return to the battlefield.</span>
<span class="attribution"><a class="source" href="http://archive.defense.gov/homepagephotos/leadphotoimage.aspx?id=10463">Defense Dept. photo by John J. Kruzel</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>By way of a second example, consider military medical and rehabilitative practices. These are assumed to be on the “help” rather than “harm” side of the split. Think, for instance, of <a href="http://doi.org/10.1177/0304375412450842">increasing diagnosis of (mild) traumatic brain injuries in military settings</a>. Treatment of these injuries may do great good in the clinical setting for individuals who receive this care. But these therapies are also part of a system of military medicine aimed at producing war readiness and potential redeployment of soldiers. The good health of soldiers (help) is integral to warfare (harm), suggesting that the help/harm divide is not so stark as the dual use framework assumes.</p>
<p>For all these reasons, it’s not possible to say that neuroscience has been “<a href="http://thebulletin.org/militarization-neuroscience">militarized</a>” or “weaponized.” The dual use framework ignores how embedded neuroscience has always been with war and national defense. In doing so, it leads us to underestimate the political task at hand, both in relation to war and in relation to science. On the side of war, it elides the ethical questions we need to be asking, not only about weaponization, but also about the supposedly benign practices of diagnosis, cure and enhancement. On the side of science, it obscures questions about what research gets funded and praised, and about the opportunity costs of allowing military imperatives to drive scientific inquiry.</p><img src="https://counter.theconversation.com/content/69097/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alison Howell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Maybe you think neuroscience has a peaceable history of benign efforts to improve lives and enhance human capacities. But its origins and development tell a different story – with ethical implications.Alison Howell, Assistant Professor of International Relations, Rutgers University - NewarkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/652152016-09-14T10:05:33Z2016-09-14T10:05:33ZConsidering ethics now before radically new brain technologies get away from us<figure><img src="https://images.theconversation.com/files/137669/original/image-20160913-4955-1hxmw14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Now's the time to think about what we're getting into with neurotechnologies.</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic.mhtml?id=133182821">Brain image via www.shutterstock.com.</a></span></figcaption></figure><p>Imagine infusing thousands of wireless devices into your brain, and using them to both monitor its activity and directly influence its actions. It sounds like the stuff of science fiction, and for the moment it still is – but possibly not for long.</p>
<p>Brain research is on a roll at the moment. And as it converges with advances in science and technology more broadly, it’s transforming what we are likely to be able to achieve in the near future. </p>
<p>Spurring the field on is the promise of more effective treatments for debilitating neurological and psychological disorders such as <a href="http://www.ninds.nih.gov/disorders/epilepsy/epilepsy.htm">epilepsy</a>, <a href="http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm">Parkinson’s disease</a> and <a href="https://www.nimh.nih.gov/health/topics/depression/index.shtml">depression</a>. But new brain technologies will increasingly have the potential to alter how someone thinks, feels, behaves and even perceives themselves and others around them – and not necessarily in ways that are within their control or with their consent.</p>
<p>This is where things begin to get ethically uncomfortable.</p>
<p>Because of concerns like these, the U.S. National Academies of Sciences, Engineering and Medicine (NAS) are <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">cohosting a meeting of experts this week</a> on responsible innovation in brain science.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/oO0zy30n_jQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Berkeley’s ‘neural dust’ sensors are one of the latest neurotech advances.</span></figcaption>
</figure>
<h2>Where are neurotechnologies now?</h2>
<p>Brain research is intimately entwined with advances in the “neurotechnologies” that not only help us study the brain’s inner workings, but also transform the ways we can interact with and influence it.</p>
<p>For example, researchers at the University of California Berkeley recently <a href="http://news.berkeley.edu/2016/08/03/sprinkling-of-neural-dust-opens-door-to-electroceuticals/">published the first in-animal trials of what they called “neural dust”</a> – implanted millimeter-sized sensors. They inserted the sensors in <a href="http://dx.doi.org/10.1016/j.neuron.2016.06.034">the nerves and muscles of rats</a>, showing that these miniature wirelessly powered and connected sensors can monitor neural activity. The long-term aim, though, is to introduce thousands of neural dust particles <a href="http://arxiv.org/abs/1307.2196">into human brains</a>.</p>
<p>The UC Berkeley sensors are still relatively large, on par with a coarse grain of sand, and just report on what’s happening around them. Yet advances in nanoscale fabrication are likely to enable their further miniaturization. (The researchers estimate they could be made <a href="https://arxiv.org/abs/1307.2196">thinner than a human hair</a>.) And in the future, combining them with technologies like <a href="http://www.scientificamerican.com/article/optogenetics-controlling/">optogenetics</a> – using light to stimulate genetically modified neurons – could enable wireless, localized brain interrogation and control.</p>
<p>Used in this way, future generations of neural dust could transform how chronic neurological disorders are managed. They could also enable hardwired brain-computer interfaces (the <a href="https://arxiv.org/abs/1307.2196">original motivation behind this research</a>), or even be used to enhance cognitive ability and modify behavior.</p>
<p>In 2013, President Obama launched the multi-year, multi-million dollar <a href="https://www.whitehouse.gov/BRAIN">U.S. BRAIN Initiative</a> (Brain Research through Advancing Innovative Neurotechnologies). The same year, the European Commission launched the <a href="https://www.humanbrainproject.eu/">Human Brain Project</a>, focusing on advancing brain research, cognitive neuroscience and brain-inspired computing. There are also active brain research initiatives in <a href="https://www.sfn.org/news-and-calendar/neuroscience-quarterly/spring-2016/china-qa">China</a>, <a href="http://rstb.royalsocietypublishing.org/content/370/1668/20140310">Japan</a>, <a href="http://english.yonhapnews.co.kr/business/2016/05/30/0504000000AEN20160530008200320.html">Korea</a>, <a href="http://www.labman.org/">Latin America</a>, <a href="http://israelbrain.org/">Israel</a>, <a href="http://bluebrain.epfl.ch/">Switzerland</a>, <a href="http://www.braincanada.ca/">Canada</a> and even <a href="http://www.ncbi.nlm.nih.gov/pubmed/21870466">Cuba</a>.</p>
<p>Together, these represent an emerging and globally coordinated effort to not only better understand how the brain works, but to find new ways of controlling and enhancing it (in particular in disease treatment and prevention); to interface with it; and to build computers and other artificial systems that are inspired by it.</p>
<h2>Cutting-edge tech comes with ethical questions</h2>
<p>This week’s <a href="http://www.nationalacademies.org/hmd/Activities/Research/NeuroForum/2016-SEP-15.aspx">NAS workshop</a> – organized by the <a href="https://www.innovationpolicyplatform.org/project-emerging-technologies-and-brain-oecd-bnct">Organization for Economic Cooperation and Development</a> and supported by the National Science Foundation and my home institution of Arizona State University – isn’t the first gathering of experts to discuss the ethics of brain technologies. In fact there’s already an active international community of experts addressing “<a href="https://en.wikipedia.org/wiki/Neuroethics">neuroethics</a>.”</p>
<p>Many of these scientific initiatives do have a prominent ethics component. The U.S. BRAIN initiative for example includes a <a href="https://www.braininitiative.nih.gov/about/newg.htm">Neuroethics Workgroup</a>, while the E.C. Human Brain Project is using an <a href="https://www.humanbrainproject.eu/2016-ethics">Ethics Map</a> to guide research and development. These and others are grappling with the formidable challenges of developing future neurotechnologies responsibly.</p>
<p>It’s against this backdrop that the NAS workshop sets out to better understand the social and ethical opportunities and challenges emerging from global brain research and neurotechnologies. A goal is to identify ways of ensuring these technologies are developed in ways that are responsive to social needs, desires and concerns. And it comes at a time when brain research is beginning to open up radical new possibilities that were far beyond our grasp just a few years ago.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=540&fit=crop&dpr=1 600w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=540&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=540&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=679&fit=crop&dpr=1 754w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=679&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/137650/original/image-20160913-4936-dt595m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=679&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Transcranial magnetic stimulation uses a powerful and rapidly changing electrical current to excite neural processes in the brain, similar to direct stimulation with electrodes.</span>
<span class="attribution"><span class="source">Eric Wassermann, M.D.</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>In 2010, for instance, researchers at MIT demonstrated that Transcranial Magnetic Stimulation, or TMS – a noninvasive neurotechnology – <a href="http://news.mit.edu/2010/moral-control-0330">could temporarily alter someone’s moral judgment</a>. Another noninvasive technique called <a href="https://www.wired.com/2014/01/read-zapping-brain/">transcranial Direct Current Stimulation</a> (tDCS) delivers low-level electrical currents to the brain via electrodes on the scalp; it’s being explored as a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3270156/">treatment for clinical conditions from depression to chronic pain</a> – while already being used in <a href="http://foc.us/">consumer products</a> and by <a href="http://www.wsj.com/articles/the-weird-world-of-brain-hacking-1447096569">do-it-yourselfers</a> to allegedly self-induce changes in mental state and ability.</p>
<p>Crude as current TMS and tDCS capabilities are, they are forcing people to think about the responsible development and use of technologies that could change behavior, personality and thinking ability at the flick of a switch. And the ethical questions they raise are far from straightforward.</p>
<p>For instance, should students be allowed to take exams while using tDCS? Should teachers be able to use tDCS in the classroom? Should TMS be used to prevent a soldier’s moral judgment from interfering with military operations?</p>
<p>These and similar questions grapple with what is already possible. Complex as they are, they pale against the challenges emerging neurotechnologies are likely to raise.</p>
<h2>Preparing now for what’s to come</h2>
<p>As research leads to an increasingly sophisticated and fine-grained understanding of how our brains function, related neurotechnologies are likely to become equally sophisticated. As they do, our ability to precisely control brain function, thinking, behavior and personality will extend far beyond what is currently possible.</p>
<p>To get a sense of the emerging ethical and social challenges such capabilities potentially raise, consider this speculative near-future scenario:</p>
<p>Imagine that in a few years’ time, the UC Berkeley neural dust has been successfully miniaturized and combined with optogenetics, allowing thousands of micrometer-sized devices to be seeded through someone’s brain that are capable of monitoring and influencing localized brain functions. Now imagine this network of neural transceivers is wirelessly connected to an external computer, and from there, to the internet.</p>
<p>Such a network – a crude foreshadowing of science fiction author <a href="http://www.goodreads.com/author/show/5807106.Iain_M_Banks">Iain M. Banks</a>’ “neural lace” (a concept that has <a href="http://www.newsweek.com/elon-musk-neural-lace-ai-artificial-intelligence-465638">already grabbed the attention of Elon Musk</a>) – would revolutionize the detection and treatment of neurological conditions, potentially improving quality of life for millions of people. It would enable external devices to be controlled through thought, effectively integrating networked brains into the Internet of Things. It could help overcome restricted physical abilities for some people. And it would potentially provide unprecedented levels of cognitive enhancement, by allowing people to interface directly with cloud-based artificial intelligence and other online systems. </p>
<p>Think Apple’s Siri or Amazon’s Echo hardwired into your brain, and you begin to get the idea.</p>
<p>Yet this neurotech – which is almost within reach of current technological capabilities – would not be risk-free. These risks could be social – a growing socioeconomic divide perhaps between those who are neuro-enhanced and those who are not. Or they could be related to privacy and autonomy – maybe the ability of employers and law enforcement to monitor, and even alter, thoughts and feelings. The innovation might threaten personal well-being and societal cohesion through (hypothetical) cyber substance abuse, where direct-to-brain code replaces psychoactive substances. It could make users highly vulnerable to neurological cyberattacks.</p>
<p>Of course, predicting and responding to possible future risks is fraught with difficulties, and depends as much on who considers what a risk (and to whom) as it does on the capabilities of emerging technologies to do harm. Yet it’s hard to avoid the likely disruptive potential of near-future neurotechnologies. Thus the urgent need to address – as a society – what we want the future of brain technologies to look like.</p>
<p>Moving forward, the ethical and responsible development of emerging brain technologies will require new thinking, along with considerable investment, in what might go wrong, and how to avoid it. Here, we can learn from thinking about responsible and ethical innovation that has come to light around <a href="https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA">recombinant DNA</a>, <a href="https://cns.asu.edu/viri">nanotechnology</a>, <a href="https://experimentearth.org/">geoengineering</a> and other cutting-edge areas of science and technology. </p>
<p>To develop future brain technologies both successfully and responsibly, we need to do so in ways that avoid potential pitfalls while not stifling innovation. We need approaches that ensure ordinary people can easily find out how these technologies might affect their lives – and they must have a say in how they’re used.</p>
<p>All this won’t necessarily be easy – responsible innovation rarely is. But through initiatives like this week’s NAS workshop and others, we have the opportunity to develop brain technologies that are profoundly beneficial, without getting caught up in an ethical minefield.</p><img src="https://counter.theconversation.com/content/65215/count.gif" alt="The Conversation" width="1" height="1" />
<h4 class="border">Disclosure</h4><p class="fine-print"><em><span>Andrew Maynard is a member of the ASU School for the Future of Innovation in Society, which is co-organizing the September 15-16 workshop on responsible innovation in brain science. </span></em></p>How will neurotech evolve? An NAS workshop this week focuses on social and ethical opportunities and challenges we face both now and down the road.Andrew Maynard, Director, Risk Innovation Lab, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.