<p>moral rights – The Conversation, 2020-10-27</p>
<h1>If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs</h1><figure><img src="https://images.theconversation.com/files/365306/original/file-20201023-13-z1yhk3.jpg?ixlib=rb-1.1.0&rect=42%2C30%2C1390%2C907&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What do you owe a faithful android like Data?</span> <span class="attribution"><a class="source" href="http://tng.trekcore.com/hd/thumbnails.php?album=145&page=7">CBS</a></span></figcaption></figure><p>In the <a href="https://www.imdb.com/title/tt0092455/">“Star Trek: The Next Generation”</a> episode <a href="https://www.youtube.com/watch?v=vjuQRCG_sUw">“The Measure of a Man,”</a> Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?</p>
<p>The philosopher <a href="https://uchv.princeton.edu/people/peter-singer">Peter Singer</a> argues that <a href="https://press.princeton.edu/books/paperback/9780691150697/the-expanding-circle">creatures that can feel pain or suffer have a claim</a> to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting it to people would be a form of speciesism, something akin to racism and sexism.</p>
<p>Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.</p>
<p>As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, <a href="https://scholar.google.com/citations?hl=en&user=p8IBbFgAAAAJ&view_op=list_works&citft=1&citft=2&citft=3&email_for_op=anand.vaidya%40sjsu.edu&gmla=AJsN-F5dgp1wqST6325SGkx3GDfsuDj1T0bjxLMYTYACMHnsI9bz6KE47rKKwPP6_QhT3W8pQ75gTI-HE5UKm6Yuy-xDaIxMhTCW0fteFvhSyYxWd8lbRRiIB3UJa9Ae_ICCLAhpkgmnLy8Fb5MqDWpLfZI3lUJn79B3uWEmyfktBXWwdP9BWQvE2dmyfOZw6RKZ_ysSudgdzzT2zzxIVbVSxbvi_KwU_rBpHCllTxkWfvgkbF3hzX1HdNN6hPcmqO5mWgyxAro2">philosophers like me</a> reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Kasparov at a chessboard with no person opposite" src="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Garry Kasparov was beaten by Deep Blue, an AI with a very deep intelligence in one narrow niche.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/world-chess-champion-garry-kasparov-makes-a-move-07-may-in-news-photo/51654330">Stan Honda/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>Two flavors of intelligence and a test</h2>
<p>IBM’s <a href="https://doi.org/10.1016/S0004-3702(01)00129-1">Deep Blue chess machine</a> famously defeated grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.</p>
<p>On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski and raise children – tasks that are related, but also very different.</p>
<p>Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called <a href="https://openai.com/">OpenAI</a> released a new version of its <a href="https://www.cs.ubc.ca/%7Eamuham01/LING530/papers/radford2018improving.pdf">Generative Pre-Training</a> language model. GPT-3 is a natural-language-processing system, trained to read and write text that people can easily understand.</p>
<p><a href="http://dailynous.com/2020/07/30/philosophers-gpt-3/">It drew immediate notice</a>, not just because of its impressive ability to mimic stylistic flourishes and put together <a href="https://theconversation.com/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591">plausible content</a>, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 <a href="https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/">doesn’t actually know anything</a> beyond how to string words together in various ways. AGI remains quite far off.</p>
<p>Named after pioneering AI researcher Alan Turing, the <a href="https://plato.stanford.edu/entries/turing-test/">Turing test</a> helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.</p>
<h2>Two kinds of consciousness</h2>
<p>There are <a href="http://www.nyu.edu/gsas/dept/philo/faculty/block/">two parts</a> to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.</p>
<p>In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.</p>
<p><a href="https://doi.org/10.1177/1073858416673817">Blindsight nicely illustrates the difference</a> between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.</p>
<p>Data is an android. How do these distinctions play out with respect to him?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Still from Star Trek: The Next Generation" src="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=445&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=445&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=445&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=559&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=559&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=559&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Do Data’s qualities grant him moral standing?</span>
<span class="attribution"><a class="source" href="http://tng.trekcore.com/hd/thumbnails.php?album=42&page=16">CBS</a></span>
</figcaption>
</figure>
<h2>The Data dilemma</h2>
<p>The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.</p>
<p>Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.</p>
<p>He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.</p>
<p>However, Data most likely lacks phenomenal consciousness – he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.</p>
<p>Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.</p>
<p>For example, what if suffering were also defined as being thwarted from pursuing a just cause that harms no one else? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. The reduction in functioning that keeps Data from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.</p>
<p>In the episode, the question ends up resting not on whether Data is self-aware – that is not in doubt. Nor is it in question whether he is intelligent – he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Artist's concept of wall-shaped binary codes making neuron-like connections" src="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=324&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=324&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=324&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=407&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=407&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=407&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When the 1s and 0s add up to a moral being.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-neural-network-royalty-free-image/647837760">ktsimage/iStock via Getty Images Plus</a></span>
</figcaption>
</figure>
<h2>Should an AI get moral standing?</h2>
<p>Data is kind – he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to <a href="https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501">protect his own existence</a>. For these reasons he appears peaceful and is easier to accept into the realm of things that have moral standing.</p>
<p>But what about <a href="https://www.youtube.com/watch?v=YbEWJXld3Ig">Skynet</a> in the <a href="https://www.youtube.com/watch?v=k64P4l2Wmeg">“Terminator”</a> movies? Or the worries recently expressed by <a href="https://www.tesla.com/elon-musk">Elon Musk</a> about <a href="https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html">AI being more dangerous than nukes</a>, and by <a href="https://www.hawking.org.uk">Stephen Hawking</a> on <a href="https://www.bbc.com/news/technology-30290540">AI ending humankind</a>?</p>
<p>[<em>Deep knowledge, daily.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=deepknowledge">Sign up for The Conversation’s newsletter</a>.]</p>
<p>Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.</p>
<p>There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs – whether kind and helpful like Data, or set on destruction, like Skynet.</p>
<p class="fine-print"><em><span>Anand Vaidya does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Anand Vaidya, Associate Professor of Philosophy, San José State University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>There is little moral basis for cannabis consumption remaining a crime</h1><figure><img src="https://images.theconversation.com/files/228566/original/file-20180720-142423-odxlkn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Shutterstock</span></figcaption></figure><p>Recent high-profile <a href="https://www.theguardian.com/commentisfree/2018/jun/18/drug-laws-epilepsy-cannabis-oil-billy-caldwell-sajid-javid">media coverage</a> has prompted public recognition that particular forms of cannabis <a href="https://www.independent.co.uk/news/health/medical-cannabis-legalise-uk-prescribe-epilepsy-dame-sally-davies-doctors-a8429231.html">can have beneficial medical effects</a> for some conditions such as epilepsy.</p>
<p>There are two main chemicals found in the plant that are used in medical cannabis: tetrahydrocannabinol (THC), the psychoactive element that produces the high, and cannabidiol (CBD), which has no psychoactive effects. Medical cannabis has a higher CBD content, so there is no THC-induced euphoria, which is what recreational users of cannabis are after.</p>
<p>Cannabis use for whatever reason is illegal in the UK, although recently licences have been issued for treatment of people with severe forms of epilepsy; medical cannabis can <a href="https://www.theguardian.com/commentisfree/2018/feb/19/war-on-drugs-medical-cannabis-children-alfie-dingley">reduce the frequency and severity</a> of seizures. There is also a plethora of <a href="https://www.theguardian.com/commentisfree/2018/feb/23/criminalise-cannabis-pain-bill-reform">anecdotal evidence</a> that cannabis has successfully eased the symptoms of other conditions such as multiple sclerosis, Parkinson’s and cancer.</p>
<p>This raises a philosophical question that is crucially important when looking at public policy in areas such as drugs: when is it justifiable for the state to prohibit and punish particular sorts of behaviour?</p>
<p>It is wrong if someone is punished for a crime they did not commit. It is also wrong if someone is punished for an action that shouldn’t be a crime in the first place, whether or not they are guilty of that crime. It would surely be wrong, then, to try to conduct a fair trial for an alleged crime unless it is fair and just that the alleged action is actually a crime.</p>
<p>For instance, it would be hard to justify giving someone a fair trial for, say, committing adultery or consuming a particular drug unless it is fair and just that it is a crime to commit adultery or take that drug.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/SXwWzaQ9Aiw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Liberty</h2>
<p>In his famous essay “On Liberty,” philosopher <a href="http://www.bbc.co.uk/history/historic_figures/mill_john_stuart.shtml">John Stuart Mill</a> offers a <a href="https://socialsciences.mcmaster.ca/econ/ugcm/3ll3/mill/liberty.pdf">moral justification</a> for legally prohibiting and punishing particular actions.</p>
<p>He rejects the idea that public opinion can settle the matter. What he calls “the tyranny of the majority” is for him a subtle kind of oppression. He asks: what are “… the nature and limits of the power which can be legitimately exercised by society over the individual?” According to Mill: “The only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others.” He specifies that: </p>
<blockquote>
<p>His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right.</p>
</blockquote>
<p>We may challenge people in such circumstances, according to Mill, and try to persuade them of the error of their ways. But as long as they are rational adults acting voluntarily, we should allow them to make their own mistakes. Only actions that harm other people should be crimes, according to Mill. That said, not all harmful actions should, in his view, be crimes.</p>
<p>Mill is aware that any of our actions might indirectly affect and possibly harm other people:</p>
<blockquote>
<p>With regard to the … constructive injury which a person causes to society, by conduct which neither violates any specific duty to the public… or to any individual except himself, the inconvenience is one society can afford to bear for the sake of the greater good of human freedom.</p>
</blockquote>
<p>One way of expressing the point is to say that there is a difference between harming people and harming them wrongfully. Not all harm that we suffer is an infringement of our moral rights.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=885&fit=crop&dpr=1 600w, https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=885&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=885&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1112&fit=crop&dpr=1 754w, https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1112&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/228577/original/file-20180720-142420-kpxlus.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1112&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Philosopher John Stuart Mill argued that only actions that harm others should be considered crimes.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/john-stuart-mill-18061873-252141700">Shutterstock</a></span>
</figcaption>
</figure>
<p>For instance, it would be beside the point to claim that because cannabis users are likely to become ill and indirectly affect other people adversely through, say, their need for medical treatment by the NHS, it should be a criminal offence to consume cannabis.</p>
<p>As citizens, we do not have a moral duty to act in such ways that the policies devised by politicians remain affordable and feasible. Rather, politicians should devise policies that are affordable and feasible, given how people actually behave. </p>
<p>To punch someone on the nose is not only harmful, it is wrongful. People have a moral duty not to punch us on the nose and we have a corresponding moral right not to be punched. However, we do not have a moral right to demand that others refrain from doing anything that might require medical treatment or any other sort of publicly financed services.</p>
<h2>A sense of proportion</h2>
<p>Much of our current legislation is not in accordance with Mill’s principle. We punish people for taking drugs that are harmful to them. The more harmful the drugs, the more severe our punishments. The punishments, particularly if they involve prison, are likely to be just as harmful as (or even more harmful than) the drugs themselves. The cost of the imprisonment is likely to be more of a burden to society than the cost of prisoners’ crimes. This all does seem very curious.</p>
<p>But objections might be made to Mill’s position. The prohibition regarding cannabis might possibly be morally justifiable on quite different grounds from those rejected by Mill. There might be a moral justification other than that suggested by Mill for making particular actions crimes. </p>
<p>For instance, what constitutes “harm” is debatable. Some might think that he does not convincingly suggest how we should distinguish between that which is wrongfully harmful and deserving of legal punishment, and that which is merely harmful. It might, for example, turn out that the activities of prominent and energetic Brexiteers or Remainers are far more harmful than those of, say, pickpockets and burglars. But it does not follow that such campaigners should be prosecuted as criminals.</p>
<p>Some actions such as, say, the defilement of corpses or voyeurism, where the people who are being watched remain unaware, might reasonably be crimes whether or not they cause harm. Perhaps not all crimes have victims.</p>
<p>Still, whether or not his argument is totally satisfactory, Mill’s “harm principle” offers a good starting point for a consideration of the crucially important but neglected question of the moral basis of the criminal law. And particularly when it comes to the issue of cannabis consumption.</p>
<p class="fine-print"><em><span>Hugh McLachlan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Hugh McLachlan, Professor Emeritus of Applied Philosophy, Glasgow Caledonian University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Does empathy have limits?</h1><figure><img src="https://images.theconversation.com/files/158985/original/image-20170301-5504-1l7vjh2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Why do we lack empathy in certain situations?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/holacomovai/8690837710/in/photolist-eeYPgu-6bssfy-WfXSo-4rycrv-psTCc-a52yDz-goSZN-k9VH1-5z3Bpm-4KfZH-7P6YtV-92KAH5-6kXXhq-8WVpL-8e8ru-Wg3dC-Xgjr6-Wcjg6-9rEXt-kGKW-CpW6a-hww1gu-KfHnw-7N9rTf-rhBB-Jemgn-HYNGHo-dW2Ghr-4FDPeP-qttUeF-KjBEu-3hgHgV-7NWxTq-McdGa-qaS8n8-frT2r-9D1tQf-b8Q62-7jevcK-4CYFqX-4hS6QN-fAyuhM-8ruKk-86CUDN-8yWCjZ-kGhhdc-kufpY7-A5oQt7-dR5jgq-anUXRe">PROFrancisco Schmidt</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>Is it possible to run out of empathy?</p>
<p>That’s the question many are <a href="http://news.psu.edu/story/430603/2016/10/17/impact/ask-ethicist-how-important-empathy-us-presidential-election">asking</a> in the wake of the U.S. presidential election. Thousands have marched in streets and airports to encourage others to expand their empathy for women, minorities and refugees. Others have argued that <a href="https://www.nytimes.com/2016/05/08/opinion/sunday/a-confession-of-liberal-intolerance.html">liberals lack empathy</a> for the plight of rural Americans.</p>
<p>Against this backdrop, some scholars have recently come out against empathy, saying that it is <a href="http://bostonreview.net/forum/paul-bloom-against-empathy">overhyped</a>, <a href="http://www.nytimes.com/2011/09/30/opinion/brooks-the-limits-of-empathy.html">unimportant</a> and, worse, <a href="https://www.wsj.com/articles/the-perils-of-empathy-1480689513">dangerous</a>. They make this recommendation because empathy appears to be limited and biased in ethically problematic ways.</p>
<p>As psychologists who study empathy, we disagree. </p>
<p>Based on advances in the science of empathy, we suggest that limits on empathy are more apparent than real. While empathy appears limited, these limits reflect our own goals, values and choices; they do not reflect limits to empathy itself. </p>
<h2>The ‘dark side’ of empathy</h2>
<p>Over the past several years, a <a href="http://dx.doi.org/10.1016/j.tics.2016.11.004">number</a> <a href="http://journal.sjdm.org/7303a/jdm7303a.htm?version=meter+at+null&module=meter-Links&pgtype=article&contentId=&mediaId=&referrer=&priority=true&action=click&contentCollection=meter-links-click">of scholars</a>, including <a href="https://hbr.org/2016/01/the-limits-of-empathy">psychologists</a> and <a href="http://dx.doi.org/10.1111/j.2041-6962.2011.00069.x">philosophers</a>, have made arguments that empathy is morally problematic. </p>
<p>For example, in a recently published and thought-provoking book, <a href="https://www.harpercollins.com/9780062339355/against-empathy">“Against Empathy,”</a> psychologist <a href="http://psychology.yale.edu/people/paul-bloom">Paul Bloom</a> highlights how empathy, so often touted for its positive outcomes, may have biases and limitations that make it a <a href="http://www.sciencedirect.com/science/article/pii/S1364661316301930">poor guide</a> for everyday life. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/158983/original/image-20170301-5540-gkyx5a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">What explains our feelings of empathy toward some and not others?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/15216811@N06/13495437084/in/photolist-myxDgJ-78A5w6-g2rBrE-fMwuhH-9rTuCZ-6XWW5t-fMwQDc-fMPgHj-dsE6bM-fMwsVc-egHtpz-fMwNMt-961qoC-fMwsWt-fMPa4Y-fMwEDH-fMwMHt-fMwxnr-fMwF7z-fMPgFC-fMwNLK-FwDTui-fMPa19-fMPfw7-fMP4sw-72FwpS-fMPnJs-fMwz3r-fMwQsn-fMPqYN-fMwwqV-fMP9pU-fMwvSx-fMwNRp-fMwRuF-fMwtZH-fMwP8Z-fMwHVK-fMPraj-fMwKrv-fMPpuy-fMP4CU-fMwt4V-fMwrBB-fMwrvD-fMPrtA-fMwuMX-fMwq4K-fMPfu3-fMP2i5">N i c o l a</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Bloom claims that empathy is a limited-capacity resource, like a fixed pie or fossil fuel that quickly runs out. He suggests:</p>
<blockquote>
<p>“We are not psychologically constituted to feel toward a stranger as we feel toward someone we love. We are <a href="https://books.google.com/books?id=op67CwAAQBAJ&printsec=frontcover&dq=against+empathy&hl=en&sa=X&ved=0ahUKEwjB09XQtLHSAhUOfiYKHSVMA-0Q6AEIHDAA">not capable of feeling</a> a million times worse about the suffering of a million than about the suffering of one.” </p>
</blockquote>
<p>Such views are echoed by other scholars as well. For example, psychologist <a href="http://psychology.uoregon.edu/profile/pslovic/">Paul Slovic</a> <a href="https://www.nytimes.com/2015/12/06/opinion/the-arithmetic-of-compassion.html?_r=0">suggests</a> that “we are psychologically wired to help only one person at a time.” </p>
<p>Similarly, philosopher <a href="https://www.gc.cuny.edu/Page-Elements/Academics-Research-Centers-Initiatives/Doctoral-Programs/Philosophy/Faculty-Bios/Jesse-Prinz">Jesse Prinz</a> has argued that empathy is prejudiced and leads to “<a href="http://onlinelibrary.wiley.com/doi/10.1111/j.2041-6962.2011.00069.x/full">moral myopia</a>,” making us act more favorably toward people we have empathy for, even if this is unfair.</p>
<p>For the same reason, psychologist <a href="http://www.kellogg.northwestern.edu/faculty/directory/waytz_adam.aspx">Adam Waytz</a> suggests that empathy can “<a href="https://hbr.org/2016/01/the-limits-of-empathy">erode ethics</a>.” Slovic, in fact, suggests that “our capacity to feel sympathy for <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0100115">people in need appears limited</a>, and this form of compassion fatigue can lead to apathy and inaction.”</p>
<h2>Are there limits?</h2>
<p>The empathy that the scholars above are arguing against is emotional: It’s known scientifically as <a href="http://dx.doi.org/10.1016/j.tics.2014.04.008">“experience sharing,”</a> which is defined as feeling the same emotions that other people are feeling. </p>
<p>This emotional empathy is thought to be limited for two main reasons: First, empathy appears to be less sensitive <a href="http://psycnet.apa.org/journals/psp/100/1/1/">to large numbers of victims</a>, as in genocides and natural disasters. Second, empathy appears to be less sensitive to the suffering of people from <a href="http://www.sciencedirect.com/science/article/pii/S002210311400095X">different racial or ideological groups</a> than our own. </p>
<p>In other words, in their view, empathy seems to put the spotlight on single victims who look or think like us.</p>
<h2>Empathy is a choice</h2>
<p>We agree that empathy can often be weaker in response to mass suffering and to people who are dissimilar from us. But the science of empathy actually suggests a different reason for why such deficits emerge.</p>
<p>As a growing body of evidence shows, it’s not that we are unable to feel empathy for mass suffering or people from other groups, but rather that sometimes we “choose” not to. In other words, you <a href="https://www.nytimes.com/2015/07/12/opinion/sunday/empathy-is-actually-a-choice.html">choose the expanse</a> of your empathy.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=381&fit=crop&dpr=1 600w, https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=381&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=381&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=479&fit=crop&dpr=1 754w, https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=479&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/158987/original/image-20170301-5525-1fvxz2y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=479&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Empathy is a choice.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/cuppini/2519976746/in/photolist-4QFxhW-it8cXt-wF5cW-344QUT-9UykwK-7Pzozk-8VmV66-ongdet-zGq77-6MkuSp-6xWo2D-4qRaXP-wgZAf-s8mtHv-5jFUy1-oEA2-agNv1-ESdP1-SctYST-5emMsM-JrAJGp-feEcgD-5rprcy-4c1Jk5-eCA45-pVFaBZ-wyPhY-bqQMhR-6yTu7N-NppZT-4cour1-dqYPfz-r5KV3k-dVfWEJ-7MLUJP-ndGEos-nubLkS-aTtz2n-rHgQj-ddtWBm-5EU22y-nvZZE-idqiU-8VEwjj-7LKmL6-ypHTz-3ixpWv-6Jw1J-auHaMM-aoU5RY">Riccardo Cuppini</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>There is evidence that we choose where to set the limits of empathy. For example, whereas people usually feel less empathy for multiple victims (versus a single victim), this <a href="http://dx.doi.org/10.1037/a0021643">tendency reverses</a> when you convince people that empathy won’t require costly donations of money or time. Similarly, people show less empathy for mass suffering when they think their helping won’t make a difference, but this pattern disappears when they think they can <a href="http://dx.doi.org/10.1016/j.obhdp.2016.06.001">make a difference</a>. </p>
<p>This tendency also varies depending on an individual’s <a href="http://www.sjdm.org/journal/13/13321a/jdm13321a.pdf">moral beliefs</a>. For instance, people from “collectivist cultures,” such as <a href="http://dx.doi.org/10.1037/a0039708">Bedouin individuals</a>, do not feel less empathy for mass suffering, perhaps because such cultures place moral weight on the suffering of the collective.</p>
<p>This can also be changed temporarily, which makes it seem even more like a choice. For <a href="http://dx.doi.org/10.1037/a0039708">example</a>, people who are primed to think about individualistic values show less empathic behavior toward mass suffering, but people who are primed to think about collectivistic values do not. </p>
<p>We argue that if there were indeed a fixed limit on empathy for mass suffering, it should not vary based upon costs, efficacy or values. Instead, the effect shifts based on what people want to feel. We suggest that the same point applies to the tendency to feel less empathy for people different from us: Whether we extend <a href="http://www.pnas.org/content/113/1/80.short">empathy to people who are dissimilar from us</a> depends on what we want to feel. </p>
<p>In other words, the scope of empathy is flexible. Even people thought to lack empathy, such as psychopaths, appear <a href="http://socialneuro.psych.utoronto.ca/understanding%20everyday%20psychopathy.pdf">able to empathize</a> if they want to do so.</p>
<h2>Why seeing limits to empathy is problematic</h2>
<p>Empathy critics usually do not talk about choice in a logically consistent manner; sometimes they say individuals choose and direct empathy willfully, yet at other times they say we have no control over the limits of empathy. </p>
<p>These are different claims with different ethical implications. </p>
<p>The problem is that arguments against empathy treat it as a biased emotion. In doing so, these arguments mistake the consequences of our own choices to avoid empathy for flaws inherent in empathy itself.</p>
<p>We suggest that empathy only appears limited; seeming insensitivity to mass suffering and dissimilar others is not built into empathy, but rather reflects the choices we make. These limits result from general trade-offs that people make as they balance some goals against others. </p>
<p>We suggest caution in using terms like “limits” and “capacity” when talking about empathy. This rhetoric can create a self-fulfilling prophecy: When people believe that empathy is a depleting resource, they exert <a href="http://psycnet.apa.org/journals/psp/107/3/475/">less empathic effort</a> and engage in more <a href="http://journals.sagepub.com/doi/abs/10.1177/1948550615604453">dehumanization</a>. </p>
<p>So, framing empathy as a fixed pie misses the mark – scientifically and practically. </p>
<h2>What are the alternatives?</h2>
<p>Even if we accepted that empathy has fixed limits – which we dispute, given the scientific evidence – what other psychological processes could we rely upon to be effective decision-makers?</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=524&fit=crop&dpr=1 754w, https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=524&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/158988/original/image-20170301-5507-1cbq9zl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=524&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Is compassion less biased?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/paullew/2768127350/in/photolist-5dBnPC-aS9KUr-4mevp4-4B4k5x-oZD338-d38KW3-4uA29J-5i7owC-7CRs2W-qFw8Mx-pQPmzj-6aGsgM-52P7Xi-cQ63h3-qzhFJc-Sk48oy-ehuRgu-qsvjz8-pgTPm8-5ETRzQ-bwS2SS-aMMhuB-6YWCNH-dVmcMr-g7eYFX-5WeEK1-g7f2sZ-9QTDTg-us8kV-g7eZ1V-9Ef4oW-krV9cX-7fb5g2-7zvjv-3ckcQP-as2zRv-ctNAuq-dpDe9R-6w8jqw-cXWQ6w-diY4HX-7WQaxo-isqh8-9pj4xo-8Dhin5-7sWBqQ-8QTSsR-Mf2it-KLXDa-a3sUvX">Fr Lawrence Lew, O.P.</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p><a href="http://psycnet.apa.org/journals/emo/16/8/1107/">Some scholars suggest</a> that <a href="https://academic.oup.com/scan/article-abstract/9/6/873/1669505/Differential-pattern-of-functional-brain">compassion is not as costly</a> or biased as empathy, and so should be considered more trustworthy. However, compassion can also be insensitive to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1592459">mass suffering</a> and people from <a href="http://journals.sagepub.com/doi/abs/10.1207/s15327957pspr0901_1">other groups</a>, just like empathy.</p>
<p>Another candidate is reasoning, which is considered to be free from emotional biases. Perhaps cold deliberation over costs and benefits, appealing to long-term consequences, would be effective. Yet this view overlooks how <a href="http://journals.sagepub.com/doi/abs/10.1177/1754073912445820">emotions can be rational</a> and how reasoning can be motivated to support desired conclusions. </p>
<p>We see this in politics, where people apply utilitarian principles differently depending on their political beliefs, suggesting <a href="http://www.sciencedirect.com/science/article/pii/S0079742108004106">principles can be biased</a> too. For example, one study found that conservative participants were <a href="http://psycnet.apa.org/psycinfo/2009-20839-006">more willing to accept consequential trade-offs</a> of civilian lives lost during wartime when the civilians were Iraqi rather than American. Reasoning may not be as objective and unbiased as empathy critics claim.</p>
<h2>Whose standard of morality are we using?</h2>
<p>Even if reasoning were objective and didn’t play favorites, is impartiality what we want from morality? Research suggests that in <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2184440">many cultures</a>, failing to favor the immediate few who share your beliefs or blood can itself be considered immoral.</p>
<p>For example, <a href="https://books.google.com/books?hl=en&lr=&id=zUiFCwAAQBAJ&oi=fnd&pg=PA61&dq=waytz+empathy&ots=j5RmKSQO1a&sig=n2vYf0_xLQmJbfManTtw_YpsJUA">some research</a> finds that whereas liberals extend empathy and moral rights to strangers, conservatives are more likely to reserve empathy for their families and friends. Some people think that morality should not play favorites, but others think that morality should be applied more strongly to family and friends.</p>
<p>So even if empathy did have fixed limits, it doesn’t follow that this makes it morally problematic. Many view impartiality as the ideal, but many don’t. Whether empathy’s partiality counts as a flaw depends on which moral standard you adopt. </p>
<p>By focusing on apparent flaws in empathy without digging deeper into how they emerge, arguments against empathy end up denouncing the wrong thing. Human reasoning is sometimes flawed and can lead us off course, especially when we have skin in the game. </p>
<p>In our view, it is these flaws in human reasoning that are the real culprits here, not empathy, which is a mere output of these more complex computations. Our real focus should be on how people balance competing costs and benefits when deciding whether to feel empathy. </p>
<p>Such an analysis makes being against empathy seem superficial. Arguments against empathy rely on an <a href="http://journals.sagepub.com/doi/abs/10.1177/1754073912445820">outdated dualism</a> between biased emotion and objective reason. But the science of empathy suggests that what may matter more is our own values and choices. Empathy may be limited sometimes, but only if you want it to be that way.</p>
<p class="fine-print"><em><span>C. Daryl Cameron receives funding from the National Science Foundation.</span></em></p><p class="fine-print"><em><span>Michael Inzlicht receives funding from an Insight Grant from the Social Sciences and Humanities Research Council (SSHRC) of Canada, from an Insight Development Grant also from SSHRC, and from a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada. </span></em></p><p class="fine-print"><em><span>William A. Cunningham receives funding from the National Science Foundation, the Social Sciences and Humanities Research Council, and the Natural Sciences and Engineering Research Council.</span></em></p>
<p class="fine-print"><em><span>C. Daryl Cameron, Assistant Professor of Psychology and Research Associate in the Rock Ethics Institute, Penn State; Michael Inzlicht, Professor of Psychology, Management, University of Toronto; William A. Cunningham, Professor of Psychology, University of Toronto. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>