Deep Blue – The Conversation

ChatGPT can’t think – consciousness is something entirely different to today’s AI

<figure><img src="https://images.theconversation.com/files/525711/original/file-20230511-10496-d2f8t7.jpg?ixlib=rb-1.1.0&rect=16%2C0%2C5631%2C3988&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/low-polygon-brain-wireframe-mesh-on-686888194">Illus_man / Shutterstock</a></span></figcaption></figure><p>There has been shock around the world at the rapid rate of progress with <a href="https://openai.com/blog/chatgpt">ChatGPT</a> and other artificial intelligence created with what’s known as large language models (LLMs). These systems can produce text that seems to display thought, understanding and even creativity.</p>
<p>But can these systems really think and understand? This is not a question that can be answered through technological advance, but careful philosophical analysis and argument tells us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.</p>
<p>In 1950, the father of modern computing, Alan Turing, <a href="https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_article.pdf">published a paper</a> which laid out a way of determining whether a computer thinks. This is now called “the Turing test”. Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one another human being, the other a computer. The game is to work out which is which. </p>
<p>If a computer can fool 70% of judges in a five-minute conversation into thinking it’s a person, the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding? </p>
<h2>Chess challenge</h2>
<p>Turing dismissed this question as hopelessly vague, and replaced it with a pragmatic definition of “thought”, whereby to think just means passing the test.</p>
<p>Turing was wrong, however, when he said the only clear notion of “understanding” is the purely behavioural one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of “understanding” that’s tied to consciousness. To understand in this sense is to consciously grasp some truth about reality. </p>
<p>In 1997, the <a href="https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov">Deep Blue AI beat chess grandmaster Garry Kasparov</a>. On a purely behavioural conception of understanding, Deep Blue had knowledge of chess strategy that surpassed that of any human being. But it was not conscious: it didn’t have any feelings or experiences.</p>
<p>Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person.</p>
<p>It doesn’t consciously understand the meaning of the words it’s spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything. </p>
<h2>Time to pay up</h2>
<p>How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch <a href="https://www.newscientist.com/article/mg23831830-300-consciousness-how-were-solving-a-mystery-bigger-than-our-minds/">bet philosopher David Chalmers a case of fine wine</a> that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years. </p>
<p>By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.</p>
<p>This is because consciousness can’t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.</p>
<figure class="align-center ">
<img alt="Chess player" src="https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/525841/original/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Unlike computers, humans consciously understand the rules of chess and the underlying strategy.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/concentrated-beautiful-girl-playing-chess-on-1740304379">LightField Studios / Shutterstock</a></span>
</figcaption>
</figure>
<p><a href="https://philarchive.org/rec/MICCPA-6">Some scientists</a> believe there is a close connection between consciousness and reflective cognition – the brain’s ability to access and use information to make decisions. This leads them to think that the brain’s prefrontal cortex – where the high-level processes of acquiring knowledge take place – is essentially involved in all conscious experience. Others deny this, <a href="https://www.frontiersin.org/articles/10.3389/fncel.2019.00302/full">arguing instead that</a> it happens in whichever local brain region that the relevant sensory processing takes place. </p>
<p>Scientists have a good understanding of the brain’s basic chemistry. We have also made progress in understanding the high-level functions of various bits of the brain. But we are almost clueless about the bit in between: how the high-level functioning of the brain is realised at the cellular level.</p>
<p>People get very excited about the potential of scans to reveal the workings of the brain. But fMRI (functional magnetic resonance imaging) has a very low resolution: <a href="https://www.nature.com/articles/nature06976">every pixel</a> on a brain scan corresponds to 5.5 million neurons, which means there’s a limit to how much detail these scans are able to show.</p>
<p>I believe progress on consciousness will come when we understand better how the brain works.</p>
<h2>Pause in development</h2>
<p>As I argue in my forthcoming book <a href="https://global.oup.com/academic/product/why-the-purpose-of-the-universe-9780198883760?lang=en&cc=jp">“Why? The Purpose of the Universe”</a>, consciousness must have evolved because it made a behavioural difference. Systems with consciousness must behave differently, and hence survive better, than systems without consciousness. </p>
<p>If all behaviour was determined by underlying chemistry and physics, natural selection would have no motivation for making organisms conscious; we would have evolved as unfeeling survival mechanisms. </p>
<p>My bet, then, is that as we learn more about the brain’s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behaviour that can’t be explained by currently known chemistry and physics. Already, <a href="http://www.wiringthebrain.com/2019/09/beyond-reductionism-systems-biology.html">some neuroscientists</a> are seeking potential new explanations for consciousness to supplement the basic equations of physics. </p>
<p>While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious. </p>
<p>There are many dangers posed by AI, and I fully support the recent call by tens of thousands of people, including tech leaders Steve Wozniak and Elon Musk,<a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/"> to pause</a> development to address safety concerns. The potential for fraud, for example, is immense. However, the argument that near-term descendants of current AI systems will be super-intelligent, and hence a major threat to humanity, is premature. </p>
<p>This doesn’t mean current AI systems aren’t dangerous. But we can’t correctly assess a threat unless we accurately categorise it. LLMs aren’t intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.</p>
<p class="fine-print"><em><span>Philip Goff does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Of the risks posed by AI, overtaking human intelligence isn’t an immediate concern.Philip Goff, Associate Professor of Philosophy, Durham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1304532020-10-27T12:14:51Z2020-10-27T12:14:51ZIf a robot is conscious, is it OK to turn it off? The moral implications of building true AIs<figure><img src="https://images.theconversation.com/files/365306/original/file-20201023-13-z1yhk3.jpg?ixlib=rb-1.1.0&rect=42%2C30%2C1390%2C907&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What do you owe a faithful android like Data?</span> <span class="attribution"><a class="source" href="http://tng.trekcore.com/hd/thumbnails.php?album=145&page=7">CBS</a></span></figcaption></figure><p>In the <a href="https://www.imdb.com/title/tt0092455/">“Star Trek: The Next Generation”</a> episode <a href="https://www.youtube.com/watch?v=vjuQRCG_sUw">“The Measure of a Man,”</a> Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?</p>
<p>The philosopher <a href="https://uchv.princeton.edu/people/peter-singer">Peter Singer</a> argues that <a href="https://press.princeton.edu/books/paperback/9780691150697/the-expanding-circle">creatures that can feel pain or suffer have a claim</a> to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting it to people would be a form of speciesism, something akin to racism and sexism.</p>
<p>Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.</p>
<p>As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, <a href="https://scholar.google.com/citations?hl=en&user=p8IBbFgAAAAJ&view_op=list_works&citft=1&citft=2&citft=3&email_for_op=anand.vaidya%40sjsu.edu&gmla=AJsN-F5dgp1wqST6325SGkx3GDfsuDj1T0bjxLMYTYACMHnsI9bz6KE47rKKwPP6_QhT3W8pQ75gTI-HE5UKm6Yuy-xDaIxMhTCW0fteFvhSyYxWd8lbRRiIB3UJa9Ae_ICCLAhpkgmnLy8Fb5MqDWpLfZI3lUJn79B3uWEmyfktBXWwdP9BWQvE2dmyfOZw6RKZ_ysSudgdzzT2zzxIVbVSxbvi_KwU_rBpHCllTxkWfvgkbF3hzX1HdNN6hPcmqO5mWgyxAro2">philosophers like me</a> reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Kasparov at a chessboard with no person opposite" src="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365311/original/file-20201023-16-14xtu5x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Garry Kasparov was beaten by Deep Blue, an AI with a very deep intelligence in one narrow niche.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/world-chess-champion-garry-kasparov-makes-a-move-07-may-in-news-photo/51654330">Stan Honda/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>Two flavors of intelligence and a test</h2>
<p>IBM’s <a href="https://doi.org/10.1016/S0004-3702(01)00129-1">Deep Blue chess machine</a> was successfully trained to beat grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.</p>
<p>On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski and raise children – tasks that are related, but also very different.</p>
<p>Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called <a href="https://openai.com/">OpenAI</a> released a new version of its <a href="https://www.cs.ubc.ca/%7Eamuham01/LING530/papers/radford2018improving.pdf">Generative Pre-Training</a> language model. GPT-3 is a natural-language-processing system trained to read and write so that its output can easily be understood by people.</p>
<p><a href="http://dailynous.com/2020/07/30/philosophers-gpt-3/">It drew immediate notice</a>, not just because of its impressive ability to mimic stylistic flourishes and put together <a href="https://theconversation.com/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591">plausible content</a>, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 <a href="https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/">doesn’t actually know anything</a> beyond how to string words together in various ways. AGI remains quite far off.</p>
<p>Named after pioneering AI researcher Alan Turing, the <a href="https://plato.stanford.edu/entries/turing-test/">Turing test</a> helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.</p>
<h2>Two kinds of consciousness</h2>
<p>There are <a href="http://www.nyu.edu/gsas/dept/philo/faculty/block/">two parts</a> to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.</p>
<p>In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.</p>
<p><a href="https://doi.org/10.1177/1073858416673817">Blindsight nicely illustrates the difference</a> between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.</p>
<p>Data is an android. How do these distinctions play out with respect to him?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Still from Star Trek: The Next Generation" src="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=445&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=445&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=445&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=559&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=559&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365309/original/file-20201023-17-pg6o2n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=559&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Do Data’s qualities grant him moral standing?</span>
<span class="attribution"><a class="source" href="http://tng.trekcore.com/hd/thumbnails.php?album=42&page=16">CBS</a></span>
</figcaption>
</figure>
<h2>The Data dilemma</h2>
<p>The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.</p>
<p>Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.</p>
<p>He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.</p>
<p>However, Data most likely lacks phenomenal consciousness – he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.</p>
<p>Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.</p>
<p>For example, what if suffering were also understood to include being thwarted from pursuing a just cause that harms no one else? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning that keeps him from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.</p>
<p>In the episode, the question ends up resting not on whether Data is self-aware – that is not in doubt. Nor is it in question whether he is intelligent – he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Artist's concept of wall-shaped binary codes making neuron-like connections" src="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=324&fit=crop&dpr=1 600w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=324&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=324&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=407&fit=crop&dpr=1 754w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=407&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/365312/original/file-20201023-15-1m9bf77.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=407&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When the 1s and 0s add up to a moral being.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-neural-network-royalty-free-image/647837760">ktsimage/iStock via Getty Images Plus</a></span>
</figcaption>
</figure>
<h2>Should an AI get moral standing?</h2>
<p>Data is kind – he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to <a href="https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501">protect his own existence</a>. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing. </p>
<p>But what about <a href="https://www.youtube.com/watch?v=YbEWJXld3Ig">Skynet</a> in the <a href="https://www.youtube.com/watch?v=k64P4l2Wmeg">“Terminator”</a> movies? Or the worries recently expressed by <a href="https://www.tesla.com/elon-musk">Elon Musk</a> about <a href="https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html">AI being more dangerous than nukes</a>, and by <a href="https://www.hawking.org.uk">Stephen Hawking</a> on <a href="https://www.bbc.com/news/technology-30290540">AI ending humankind</a>?</p>
<p>Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.</p>
<p>There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs – whether kind and helpful like Data, or set on destruction, like Skynet.</p>
<p class="fine-print"><em><span>Anand Vaidya does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Philosophers say now is the time to mull over what qualities should grant an artificially intelligent machine moral standing.Anand Vaidya, Associate Professor of Philosophy, San José State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/784102017-05-26T13:05:17Z2017-05-26T13:05:17ZGoogle’s latest Go victory shows machines are no longer just learning, they’re teaching<p>Just over 20 years ago was the first time a <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882">computer beat a human world champion</a> in a chess match, when IBM’s Deep Blue supercomputer beat Gary Kasparov in a narrow victory of 3½ games to 2½. Just under a decade later, machines were deemed to have conquered the game of chess when Deep Fritz, a piece of software running on a desktop PC, <a href="http://en.chessbase.com/post/kramnik-vs-deep-fritz-computer-wins-match-by-4-2">beat 2006 world champion Vladimir Kramnik</a>. Now the ability of computers to take on humanity has taken a step further by mastering the far more complex board game Go, with Google’s AlphaGo program <a href="https://www.nytimes.com/2017/05/25/business/google-alphago-defeats-go-ke-jie-again.html?_r=0">beating world number one</a> Ke Jie twice in a best-of-three series.</p>
<p>This significant milestone shows just how far computers have come in the past 20 years. Deep Blue’s victory at chess showed machines could rapidly process huge amounts of information, <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882">paving the way for the big data revolution</a> we see today. But AlphaGo’s triumph represents the development of real artificial intelligence by a machine that can recognise patterns and learn the best way to respond to them. What’s more, it may signify a new evolution in AI, where computers not only learn how to beat us but can start to teach us as well.</p>
<p>Go is considered one of the <a href="https://www.quora.com/Is-Go-the-most-complicated-2-player-board-game">world’s most complex board games</a>. Like chess, it’s a game of strategy but it also has several key differences that make it much harder for a computer to play. The rules are relatively simple but the strategies involved in playing the game are highly complex. It is also much harder to calculate the end position and winner in a game of Go.</p>
<p>It has a larger board (a 19x19 grid rather than an 8x8 one) and an unlimited number of pieces, so there are many more ways that the board can be arranged. Whereas chess pieces start in set positions and can each make a limited number of moves each turn, Go starts with a blank board and players can place a piece in any of the 361 free spaces. Each game takes on average twice as many turns as chess and there are six times as many legal move options per turn.</p>
<p>Each of these features means you can’t build a Go program using the same techniques as for chess machines. These tend to use a “brute force” approach of analysing the potential of large numbers of possible moves to select the best one. Feng-Hsiung Hsu, one of the key contributors to the Deep Blue team, argued in 2007 that <a href="http://spectrum.ieee.org/computing/software/cracking-go">applying this strategy to Go</a> would require a million-fold increase in processing speed over Deep Blue, so that a computer could analyse 100 trillion positions per second.</p>
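<p>To put some rough numbers on that claim, the short calculation below compares the two game trees and checks the “million-fold” figure. The branching factors and game lengths used are commonly quoted approximations rather than figures from this article:</p>
<pre><code># Rough, illustrative arithmetic behind the claim that chess-style brute force
# does not transfer to Go. The figures are commonly quoted approximations.
chess_branching, chess_plies = 35, 80     # average legal moves and game length, chess
go_branching, go_plies = 250, 150         # average legal moves and game length, Go

chess_tree = chess_branching ** chess_plies
go_tree = go_branching ** go_plies
print("chess game tree is roughly 10 **", len(str(chess_tree)) - 1)  # about 10**123
print("go game tree is roughly 10 **", len(str(go_tree)) - 1)        # about 10**359

# Hsu's figure: 100 trillion positions per second is roughly a million-fold
# jump over Deep Blue's 100-200 million positions per second.
print(100_000_000_000_000 // 100_000_000)  # 1,000,000
</code></pre>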
<h2>Learning new moves</h2>
<p>The strategy used by AlphaGo’s creators at Google subsidiary DeepMind was to create an artificial intelligence program that could learn how to identify favourable moves from useless ones. This meant it wouldn’t have to analyse all the possible moves that could be made at each turn. In preparation for its first match against professional Go player Lee Sedol, AlphaGo analysed <a href="https://www.wired.com/2017/05/googles-alphago-levels-board-games-power-grids">around 300m moves</a> made by professional Go players. It then used what are called deep learning and reinforcement learning techniques to <a href="https://blog.google/topics/machine-learning/what-we-learned-in-seoul-with-alphago/">develop its own ability</a> to identify favourable moves.</p>
<p>But this wasn’t enough to enable AlphaGo to defeat highly ranked human players. The software was run on custom microchips specifically designed for machine learning, known as tensor processing units (TPUs), to support very large numbers of computations. This seems similar to the approach used by the designers of Deep Blue, who also developed custom chips for high-volume computation. The stark difference, however, is that Deep Blue’s chips could only be used for playing chess. AlphaGo’s chips run Google’s general-purpose AI framework, TensorFlow, and are also used to <a href="https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html">power other Google services</a> such as Street View and optimisation tasks in the firm’s data centres.</p>
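<p>As a concrete, if simplified, illustration of the kind of supervised “move prediction” learning described above, the sketch below trains a tiny TensorFlow network to map board positions to the move an expert chose. It is a hypothetical example written for this article, not DeepMind’s code; AlphaGo’s real networks were far larger and were further refined with reinforcement learning from self-play.</p>
<pre><code># A minimal, hypothetical sketch of supervised move prediction, not DeepMind's code.
import numpy as np
import tensorflow as tf

BOARD = 19  # Go is played on a 19x19 grid

# Placeholder training data standing in for real expert games: a few input
# feature planes per position, and the index (0..360) of the move played.
positions = np.random.rand(1024, BOARD, BOARD, 3).astype("float32")
expert_moves = np.random.randint(0, BOARD * BOARD, size=1024)

# A small convolutional "policy network": board in, probability per point out.
policy_net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(BOARD, BOARD, 3)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(BOARD * BOARD, activation="softmax"),
])

policy_net.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Supervised phase: imitate the professionals. AlphaGo then improved this
# policy further with reinforcement learning from games against itself.
policy_net.fit(positions, expert_moves, epochs=1, batch_size=64)
</code></pre>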
<h2>Lesson for us all</h2>
<p>The other thing that has changed since DeepBlue’s victory is the respect that humans have for their computer opponents. When playing chess computers, it was common for the human players to adopt so-called <a href="https://www.chess.com/blog/ramin18/anti-computer-tactics-gaming">anti-computer tactics</a>. This involves making conservative moves to prevent the computer from evaluating positions effectively.</p>
<p>In his first match against AlphaGo, however, Ke Jie adopted tactics that had previously been used by his opponent to <a href="https://www.wired.com/2017/05/revamped-alphago-wins-first-game-chinese-go-grandmaster/">beat it at its own game</a>. Although this attempt failed, it demonstrates a change in approach for leading human players taking on computers. Instead of trying to stifle the machine, they have begun trying to learn from how it played in the past.</p>
<p>In fact, the machine has already influenced the professional game of Go, with grandmasters <a href="https://deepmind.com/blog/exploring-mysteries-alphago/">adopting AlphaGo’s strategy</a> during their tournament matches. This machine has taught humanity something new about a game it has been playing for over 2,500 years, freeing us from sole reliance on the accumulated experience of millennia.</p>
<p>What then might the future hold for the AI behind AlphaGo? The success of Deep Blue <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882">triggered rapid developments</a> that have directly impacted the techniques applied in big data processing. The benefit of the technology used to implement AlphaGo is that it can already be applied to other problems that require pattern identification.</p>
<p>For example, the same techniques have been applied to <a href="https://www.wired.com/2017/05/using-ai-detect-cancer-not-just-cats/">the detection of cancer</a> and to create robots that can learn to do <a href="https://www.wired.com/2017/01/googles-go-playing-machine-opens-door-robots-learn/">things like open doors</a>, among <a href="https://www.wired.com/2017/01/googles-go-playing-machine-opens-door-robots-learn/">many other applications</a>. The underlying framework used in AlphaGo, Google’s TensorFlow, has been made freely available for developers and researchers to build new machine-learning programs using standard computer hardware. </p>
<p>More excitingly, combining it with the many computers available through the internet cloud creates the promise of delivering <a href="https://cloud.google.com/tpu/">machine-learning supercomputing</a>. When this technology matures, the potential will exist for the creation of self-taught machines in wide-ranging roles that can support complex decision-making tasks. Of course, what may be even more profound are the social impacts of having machines that not only teach themselves but teach us in the process.</p>
<p class="fine-print"><em><span>Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Google’s AlphaGo victory over the human world champion shows how far things have come since DeepBlue.Mark Robert Anderson, Professor in Computing and Information Systems, Edge Hill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768822017-05-11T14:12:50Z2017-05-11T14:12:50ZTwenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution<p>On the <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-three">seventh move of the crucial deciding game</a>, black made what some now consider to have been a critical error. When black mixed up the moves for the <a href="http://www.chessgames.com/perl/chessgame?gid=1070917">Caro-Kann defence</a>, white took advantage and created a new attack by sacrificing a knight. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat. The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament, which ignited an international conspiracy theory that is <a href="http://en.chessbase.com/post/deep-blue-s-cheating-move">still questioned 20 years later</a>.</p>
<p>This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: IBM supercomputer, Deep Blue.</p>
<p>In defeating Kasparov on May 11 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it. </p>
<p>In an echo of the <a href="http://www.slate.com/blogs/atlas_obscura/2015/08/20/the_turk_an_supposed_chess_playing_robot_was_a_hoax_that_started_an_early.html">chess automaton hoaxes</a> of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grand master. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine.</p>
<hr>
<p><strong><em>Listen to an <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-podcast-88607">audio version</a> of this article on The Conversation’s <a href="https://theconversation.com/uk/topics/in-depth-out-loud-podcast-46082">In Depth Out Loud</a> podcast.</em></strong></p>
<iframe src="https://player.acast.com/5e29c8205aa745a456af58c8/episodes/5e29c8365aa745a456af58d6?theme=default&cover=1&latest=1" frameborder="0" width="100%" height="110px" allow="autoplay"></iframe>
<hr>
<p>Yet the reality was that Deep Blue’s victory was precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behaviour. This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale.</p>
<p>What the match did do, however, was signal the start of a societal shift that is gaining increasing speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the <a href="http://www.computerweekly.com/feature/How-the-financial-services-sector-uses-big-data-analytics-to-predict-client-behaviour">financial systems</a> that dominate the economy to <a href="http://www.bbc.co.uk/news/business-26613909">online dating apps</a> that try to find us the perfect partner. What started as a student project helped usher in the age of big data.</p>
<h2>A human error</h2>
<p>The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used <a href="http://www.nytimes.com/1993/09/15/arts/declining-a-draw-short-loses-to-a-kasparov-counterattack.html">against human opponents</a> in the past.</p>
<p>What surprised Kasparov was <a href="http://www.thechessmind.net/blog/2012/7/14/a-look-back-at-deeper-blue-vs-kasparov-1997game-2.html">Deep Blue’s subsequent move</a>. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">“stunning” and “exceptional”</a>. The move left Kasparov riled and ultimately thrown off his strategy. He was so perturbed that he eventually walked away, forfeiting the game. Worse still, he never recovered, drawing the next three games and then making the error that led to his demise in the final game.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=598&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=598&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=598&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=751&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=751&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=751&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Open file.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Open_file">Wikipedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>The move was based on the strategic advantage that a player can gain from creating an <a href="https://www.chess.com/article/view/open-files3">open file</a>, a column of squares on the board (as viewed from above) that contains no pieces. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">training with the grand master Joel Benjamin</a>, the Deep Blue team had learnt there was sometimes a more strategic option than opening a file and then moving a rook to it. Instead, the tactic involved piling pieces onto the file and then choosing when to open it up.</p>
<p>When the programmers learned this, they rewrote Deep Blue’s code to incorporate the moves. During the game, the computer used the position of having a potential open file to put pressure on Kasparov and force him into defending on every move. That psychological advantage eventually wore Kasparov down. </p>
<p>From the moment that Kasparov lost, <a href="http://www.bbc.co.uk/programmes/p03rq51h">speculation and conspiracy theories</a> started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games to rectify bugs that had been identified during play. They also rejected the claim that the programming had been adapted to Kasparov’s style of play. Instead they had relied on the computer’s ability to search through huge numbers of possible moves.</p>
<p>IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the <a href="https://www.research.ibm.com/deepblue/watch/html/c.shtml">computer’s detailed logs</a>, as Kasparov had also requested, until after the decommissioning. But the subsequent <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">detailed analysis</a> of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.</p>
<p>There has since been speculation that Deep Blue only triumphed because <a href="https://www.cnet.com/news/did-a-bug-in-deep-blue-lead-to-kasparovs-defeat/">of a bug in the code</a> during the first game. One of <a href="http://fivethirtyeight.com/features/rage-against-the-machines/">Deep Blue’s designers</a> has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy.</p>
<p>He managed to win the game and the bug was fixed for the second round. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “<a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">terrible blunder</a>”.</p>
<p>Whichever of these accounts of Kasparov’s reactions to the match are true, they all point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that today have helped computers win at far more complex games, <a href="https://theconversation.com/googles-go-triumph-is-a-milestone-for-artificial-intelligence-research-53762">such as Go</a>.</p>
<p>But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available in the world today. What makes this even more amazing is that it started not as an exuberant project from one of the largest computer manufacturers but as a student thesis in the 1980s.</p>
<h2>Chess race</h2>
<p>When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an <a href="https://books.google.co.uk/books?id=zV0W4729UqkC&printsec=frontcover&dq=%22Behind+Deep+Blue:+Building+the+Computer+that+Defeated+the+World+Chess+Champion,%22+rivalry">intense rivalry</a> between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">interest in computer chess</a>. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the <a href="http://www.nytimes.com/1988/09/26/nyregion/for-first-time-a-chess-computer-outwits-grandmaster-in-tournament.html">first to beat a chess grand master</a>, and was asked to assist with hardware design.</p>
<p>But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer known as ChipTest, drawing on the architecture of Bell Laboratories’ <a href="http://link.springer.com/chapter/10.1007%2F978-1-4757-1968-0_28">chess machine, Belle</a>. ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second.</p>
<p>Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor. Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game.</p>
<p>But Deep Thought hit a road block later that year when it came up against (<a href="http://www.nytimes.com/1989/10/23/nyregion/kasparov-beats-chess-computer-for-now.html">and lost to</a>) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM. </p>
<p>Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “<a href="https://chessprogramming.wikispaces.com/Evaluation">evaluation function</a>”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together.</p>
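<p>As a toy illustration of what such a formula can look like (a made-up example for this article, far simpler than anything Deep Blue actually used), the sketch below counts material and adds a small bonus for rooks standing on the open files discussed earlier:</p>
<pre><code># A toy evaluation function, not Deep Blue's formula: material count plus a
# small bonus for rooks on open files (files containing no pawns).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """board: 8 rank strings, uppercase for White, lowercase for Black, '.' empty.
    Returns a score from White's point of view (positive favours White)."""
    score = 0.0
    for rank in board:
        for square in rank:
            if square == ".":
                continue
            value = PIECE_VALUES[square.upper()]
            score += value if square.isupper() else -value
    for file_idx in range(8):
        file_squares = [rank[file_idx] for rank in board]
        if not any(sq.upper() == "P" for sq in file_squares):  # an open file
            for sq in file_squares:
                if sq == "R":
                    score += 0.5   # White rook controls an open file
                elif sq == "r":
                    score -= 0.5   # Black rook controls an open file
    return score

# The starting position is balanced, so the score is 0.0.
START = ["rnbqkbnr", "pppppppp", "........", "........",
         "........", "........", "PPPPPPPP", "RNBQKBNR"]
print(evaluate(START))
</code></pre>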
<p>The problem was that the communication between the chips was slow and used up a lot of processing power. What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads such as off-chip communication and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1132&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1132&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1132&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An imposing opponent.</span>
<span class="attribution"><span class="source">Jim Gardner/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Increasing the number of moves the computer could process was important because chess computers have traditionally used what is known as “brute force” techniques. Human players <a href="http://www.csis.pace.edu/%7Ectappert/dps/pdf/ai-chess-deep.pdf">learn from past experience</a> to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.</p>
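<p>A back-of-the-envelope calculation (with illustrative figures of my own, not the article’s) shows why the jump in raw speed mattered for this kind of exhaustive look-ahead, and why pruning still matters on top of it:</p>
<pre><code># How deep can a purely exhaustive search go in a fixed time budget?
# Illustrative figures only; real programs prune heavily rather than search everything.
import math

BRANCHING = 35  # a commonly quoted average number of legal moves per chess position

def plies_reachable(nodes_per_second, seconds):
    """Depth in half-moves that an exhaustive search can finish within the budget."""
    budget = nodes_per_second * seconds
    return int(math.log(budget, BRANCHING))

# Deep Thought (~720,000 positions/s) vs Deep Blue (~200 million positions/s),
# assuming roughly three minutes of thinking time for a move:
for name, speed in [("Deep Thought", 720_000), ("Deep Blue", 200_000_000)]:
    print(name, plies_reachable(speed, 180), "plies by exhaustive search")

# Alpha-beta pruning and selective extensions are what let real programs probe
# promising lines far deeper than this naive bound suggests.
</code></pre>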
<p>By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue <a href="http://content.time.com/time/subscriber/article/0,33009,984304,00.html">lost the overall match</a> 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.</p>
<p>To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what enabled the team to optimise the parallel computing power across the chips so effectively. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around <a href="https://www.theguardian.com/theguardian/2011/may/12/deep-blue-beats-kasparov-1997">200,000,000 moves per second</a>. This meant it could explore how each possible strategy would play out <a href="http://www.sciencedirect.com/science/article/pii/S0004370201001291">up to 40 or more moves</a> into the future.</p>
<h2>Parallel revolution</h2>
<p>By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded <a href="http://www.telegraph.co.uk/news/matt/9885264/From-the-archive-Chess-computer-beats-Kasparov-in-19-moves.html">with a story</a> when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come. What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data.</p>
<p>Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “<a href="https://theconversation.com/explainer-what-is-big-data-13780">big data</a>”) to look for significant patterns are involved in <a href="https://www.theguardian.com/public-leaders-network/2014/apr/17/big-data-government-public-services-expert-views">planning public services</a> such as transport or healthcare, and enable companies to <a href="https://theconversation.com/the-future-of-online-advertising-is-big-data-and-algorithms-69297">target advertising</a> to specific groups of people. </p>
<p>These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">significant insight</a> into the massively parallel multi-chip systems that have made this possible. In particular they showed the capabilities of a general-purpose computer system that controlled a large number of custom chips designed for a specific application.</p>
<p>The science of <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/transform/">molecular dynamics</a>, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics to look ahead to see how new drugs might react in the body, just like looking ahead at different chess moves. Molecular dynamic simulations have helped <a href="http://pubs.acs.org/doi/pdf/10.1021/acs.jmedchem.5b01684">speed up the development</a> of successful drugs, such as some of those <a href="https://bmcbiol.biomedcentral.com/articles/10.1186/1741-7007-9-71">used to treat HIV</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Molecular modelling.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>For very broad applications, such as modelling <a href="https://www.research.ibm.com/deepblue/learn/html/e.5.shtml">financial systems</a> and <a href="https://www.research.ibm.com/deepblue/learn/html/e.4.shtml">data mining</a>, designing custom chips for an individual task in these areas would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors.</p>
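<p>The pattern itself is simple to sketch. The snippet below, written in Python for illustration, splits an invented dataset across several worker processes and then combines their partial answers. It shows the same divide-and-combine idea, though nothing here is taken from IBM's or Deep Blue's actual code.</p>
<pre><code>from multiprocessing import Pool

def count_matches(chunk):
    """Work done by one worker process: scan its slice of the data."""
    return sum(1 for record in chunk if record % 97 == 0)

if __name__ == "__main__":
    data = list(range(1_000_000))                     # invented stand-in dataset
    n_workers = 8
    size = -(-len(data) // n_workers)                 # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    with Pool(n_workers) as pool:                     # general-purpose controller...
        partials = pool.map(count_matches, chunks)    # ...farming work out to workers

    print(sum(partials))                              # combine the partial answers
</code></pre>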
<p>Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen but also handle information using lots of processors in parallel. So they are now often used in <a href="http://www.nvidia.com/object/what-is-gpu-computing.html">high-performance computers</a> processing large datasets and to run powerful artificial intelligence tools such as <a href="https://theconversation.com/what-powers-facebook-and-googles-ai-and-how-computers-could-mimic-brains-52232">Facebook’s digital assistant</a>. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.</p>
<p>The world of chess playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. And both times he managed to avoid a defeat, although he still made errors that forced him <a href="http://www.thechessdrum.net/tournaments/Kasparov-DeepJr/">into a draw</a>. However, both machines convincingly beat their human counterparts in the <a href="https://en.wikipedia.org/wiki/Human%E2%80%93computer_chess_matches#Man_vs_Machine_World_Team_Championship_.282004.E2.80.932005.29">2004 and 2005 Man vs Machine World Team Championships</a>.</p>
<p>Junior and Fritz marked a <a href="https://books.google.co.uk/books?id=KkQBCAAAQBAJ&pg=PA30&dq=chess+machines+junior+fritz&hl=en&sa=X&ved=0ahUKEwiJsfvl3-TTAhWnJcAKHVKtAOgQ6AEILTAB#v=onepage&q=chess%20machines%20junior%20fritz&f=false">change in the approach</a> to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. This approach can beat brute-force techniques while running on nothing more than a desktop PC.</p>
<p>But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win. One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.</p>
<p class="fine-print"><em><span>Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The in depth story of a student project that paved the way for a society-level shift in how we use computers.Mark Robert Anderson, Professor in Computing and Information Systems, Edge Hill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/773832017-05-11T01:03:54Z2017-05-11T01:03:54ZComputers to humans: Shall we play a game?<figure><img src="https://images.theconversation.com/files/168795/original/file-20170510-21596-p2i8u6.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence can bring many benefits to human gamers.</span> <span class="attribution"><a class="source" href="https://www.instagram.com/gamingartbysj/">Sam Jordan Belanger</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Way back in the 1980s, a schoolteacher challenged me to write a computer program that played tic-tac-toe. I failed miserably. But just a couple of weeks ago, I explained to one of my computer science graduate students how to solve tic-tac-toe using the so-called “<a href="https://en.wikipedia.org/wiki/Minimax">Minimax algorithm</a>,” and it took us about an hour to write a program to do it. Certainly my coding skills have improved over the years, but computer science has come a long way too.</p>
<p>What seemed impossible just a couple of decades ago is startlingly easy today. In 1997, people were stunned when a chess-playing IBM computer named <a href="http://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html">Deep Blue beat international grandmaster Garry Kasparov</a> in a six-game match. In 2015, Google revealed that its DeepMind system had mastered several <a href="http://www.techrepublic.com/article/google-ai-beats-humans-at-more-classic-arcade-games-than-ever-before/">1980s-era video games</a>, including teaching itself a crucial winning strategy in “<a href="https://www.youtube.com/watch?v=V1eYniJ0Rnk">Breakout</a>.” In 2016, Google’s AlphaGo system beat a top-ranked Go player in a <a href="https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/">five-game tournament</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/V1eYniJ0Rnk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">An artificial intelligence system learns to play ‘Breakout.’</span></figcaption>
</figure>
<p>The quest for technological systems that can beat humans at games continues. In late May, AlphaGo will take on <a href="https://arstechnica.com/information-technology/2017/04/deepmind-alphago-go-ke-jie-china/">Ke Jie</a>, the best player in the world, among other opponents at the Future of Go Summit in Wuzhen, China. With increasing computing power, and improved engineering, computers can beat humans even at games we thought relied on human intuition, wit, deception or bluffing – like <a href="http://www.csd.cs.cmu.edu/news/carnegie-mellon-ai-takes-chinese-poker-players">poker</a>. I recently saw a video in which volleyball players practice their serves and spikes against <a href="https://www.youtube.com/watch?v=EHKv6lRRV10">robot-controlled</a> rubber arms trying to block the shots. One lesson is clear: When machines play to win, human effort is futile. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/EHKv6lRRV10?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Robots play volleyball.</span></figcaption>
</figure>
<p>This can be great: We want a perfect AI to drive our cars, and a tireless system looking for signs of cancer in X-rays. But when it comes to play, we don’t want to lose. Fortunately, AI can make games more fun, and perhaps even endlessly enjoyable.</p>
<h2>Designing games that never get old</h2>
<p>Today’s game designers – whose releases <a href="http://www.businessinsider.com/here-are-the-top-10-highest-grossing-video-games-of-all-time-2012-6">earn more than a blockbuster movie</a> – see a problem: Creating an unbeatable artificial intelligence system is pointless. Nobody wants to play a game they have no chance of winning.</p>
<p>But people do want to play <a href="https://theconversation.com/the-future-is-in-interactive-storytelling-76772">games that are immersive, complex and surprising</a>. Even today’s best games become stale after a person plays for a while. The ideal game will engage players by adapting and reacting in ways that keep the game interesting, maybe forever.</p>
<p>So when we’re designing artificial intelligence systems, we should look not to the triumphant Deep Blues and AlphaGos of the world, but rather to the overwhelming success of massively multiplayer online games like “<a href="https://worldofwarcraft.com/en-us/">World of Warcraft</a>.” These sorts of games are graphically well-designed, but their key attraction is interaction. </p>
<p>It seems as if most people are not drawn to extremely difficult logical puzzles like chess and Go, but rather to meaningful connections and communities. The real challenge with these massively multiplayer online games is not whether they can be beaten by intelligence (human or artificial), but rather how to keep the experience of playing them fresh and new every time.</p>
<h2>Change by design</h2>
<p>At present, game environments allow people lots of possible interactions with other players. The roles in a dungeon <a href="https://en.wikipedia.org/wiki/Raid_(gaming)">raiding party</a> are well-defined: Fighters take the damage, healers help them recover from their injuries and the fragile wizards cast spells from afar. Or think of “<a href="https://en.wikipedia.org/wiki/Portal_2">Portal 2</a>,” a game focused entirely on collaborating robots puzzling their way through a maze of cognitive tests.</p>
<p>Exploring these worlds together allows you to form common memories with your friends. But any changes to these environments or the underlying plots have to be made by human designers and developers.</p>
<p>In the real world, changes happen naturally, without supervision, design or manual intervention. Players learn, and living things adapt. Some organisms even <a href="http://dx.doi.org/10.1086/691101">co-evolve</a>, reacting to each other’s developments. (A similar phenomenon happens in a <a href="http://www.amnh.org/exhibitions/einstein/peace-and-war/nuclear-arms-race/">weapons technology arms race</a>.)</p>
<p>Computer games today lack that level of sophistication. And for that reason, I don’t believe developing an artificial intelligence that can play modern games will meaningfully advance AI research. </p>
<h2>We crave evolution</h2>
<p>A game worth playing is a game that is unpredictable because it adapts, a game that is ever novel because novelty is created by playing the game. Future games need to evolve. Their characters shouldn’t just react; they need to explore and learn to exploit weaknesses or cooperate and collaborate. <a href="http://www.livescience.com/474-controversy-evolution-works.html">Darwinian evolution and learning</a>, we understand, are the drivers of all novelty on Earth. It could be what <a href="https://theconversation.com/evolving-our-way-to-artificial-intelligence-54100">drives change in virtual environments</a> as well.</p>
<p>Evolution figured out how to create <a href="https://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616">natural intelligence</a>. Shouldn’t we, instead of trying to code our way to AI, simply evolve it? Several labs – <a href="http://hintzelab.msu.edu/">including my own</a> and that of <a href="http://adamilab.msu.edu/">my colleague Christoph Adami</a> – are working on what is called “<a href="https://en.wikipedia.org/wiki/Neuroevolution">neuro-evolution</a>.”</p>
<p>In a computer, we simulate complex environments, like a road network or a biological ecosystem. We create virtual creatures and challenge them to evolve over hundreds of thousands of simulated generations. Evolution itself then produces the best drivers, or the organisms best adapted to the conditions – those are the ones that survive.</p>
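<p>Stripped to its essentials, the loop looks something like the Python sketch below: a population of tiny fixed-topology networks is scored, the fittest survive, and mutated copies fill the next generation. The task here (matching the XOR function) and the network size are stand-ins chosen for brevity, not the environments used in these labs.</p>
<pre><code>import math
import random

def act(w, x1, x2):
    """A tiny fixed-topology network: 2 inputs, 2 hidden units, 1 output."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    """Higher is better: negative squared error on the XOR truth table."""
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return -sum((act(w, a, b) - target) ** 2 for (a, b), target in cases)

def mutate(w, rate=0.2):
    """Offspring are copies of a parent with small random weight changes."""
    return [wi + random.gauss(0, rate) for wi in w]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)            # selection: fittest first
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(round(fitness(max(population, key=fitness)), 3))    # approaches 0 as XOR is solved
</code></pre>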
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5lJuEW-5vr8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A neuro-evolution learns to drive a car.</span></figcaption>
</figure>
<p>Today’s AlphaGo is beginning this process, learning by continuously <a href="https://www.theguardian.com/technology/2016/jun/27/alphago-deepmind-ai-code-google">playing games against itself</a>, and by analyzing records of games played by top Go champions. But it does not learn while playing the way we do, through unsupervised experimentation. And it doesn’t adapt to a particular opponent: For these computer players, the best move is the best move, regardless of an opponent’s style. </p>
<p>Programs that learn from experience are the next step in AI. They would make computer games much more interesting, and enable robots not only to function better in the real world, but also to adapt to it on the fly.</p>
<p class="fine-print"><em><span>Arend Hintze receives funding from NSF BEACON Center for the Study of Evolution in Action Cooperative Agreement No. DBI-0939454, and received funding from Strength in Numbers Game Studio </span></em></p>Twenty years after Deep Blue beat Garry Kasparov at chess, artificial intelligence can make games more fun, and perhaps even endlessly enjoyable, if it learns to adapt.Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/676162016-11-14T01:40:10Z2016-11-14T01:40:10ZUnderstanding the four types of AI, from reactive robots to self-aware beings<figure><img src="https://images.theconversation.com/files/143746/original/image-20161028-15775-i00zp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Robots will need to teach themselves.</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-350368274/">Robot reading via shutterstock.com</a></span></figcaption></figure><p>The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?</p>
<p>The new <a href="https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf">White House report on artificial intelligence</a> takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop missed some important points.</p>
<p>As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.</p>
<p>The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to <a href="http://dx.doi.org/10.1016/S0004-3702(01)00129-1">play “Jeopardy!” well</a>, and <a href="http://dx.doi.org/10.1038/nature16961">beat human Go masters</a> at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.</p>
<p>We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.</p>
<h2>Type I AI: Reactive machines</h2>
<p>The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. <a href="http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/">Deep Blue, IBM’s chess-playing supercomputer</a>, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine. </p>
<p>Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.</p>
<p>But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.</p>
<p>This type of intelligence involves the computer <a href="https://www.youtube.com/watch?v=t3kXWSctj2Q">perceiving the world directly</a> and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that <a href="http://dx.doi.org/10.1016/0004-3702(91)90053-M">we should only build machines</a> like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.</p>
<p>The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The <a href="https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/">innovation in Deep Blue’s design</a> was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to <a href="https://www.cnet.com/news/did-a-bug-in-deep-blue-lead-to-kasparovs-defeat/">stop pursuing some potential future moves</a>, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.</p>
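<p>The basic principle behind that kind of cut-off can be shown with textbook alpha-beta pruning, sketched in Python below: once a branch is provably no better than one already examined, the search abandons it. Deep Blue's actual search combined this sort of idea with custom hardware and far more elaborate heuristics, so this is an illustration of the principle rather than its implementation.</p>
<pre><code>def alphabeta(node, depth, alpha, beta, maximising, children, evaluate):
    """Textbook alpha-beta search; `children` and `evaluate` describe the game."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximising:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:            # the opponent would never allow this line...
                break                    # ...so stop pursuing it (the cut-off)
        return value
    value = float("inf")
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A tiny hand-made game tree: leaves are scored from the maximiser's point of view.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 children=lambda n: tree.get(n, []),
                 evaluate=lambda n: scores.get(n, 0))
print(best)   # 3: branch "b" is abandoned as soon as the opponent can hold it to 2
</code></pre>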
<p>Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a <a href="http://pages.cs.wisc.edu/%7Ebolo/shipyard/neural/local.html">neural network</a> to evaluate game developments. </p>
<p>These methods do allow AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are <a href="http://dx.doi.org/10.1109/CVPR.2015.7298640">easily fooled</a>. </p>
<p>They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.</p>
<h2>Type II AI: Limited memory</h2>
<p>This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.</p>
<p>These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car. </p>
<p>But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.</p>
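<p>A minimal sketch of that kind of limited memory, in Python, might keep just a short window of recent sightings of a nearby car – enough to estimate its speed and heading before the observations fall away. The field names and numbers here are invented for illustration, not taken from any real self-driving system.</p>
<pre><code>from collections import deque

class TrackedCar:
    """Keep only a short window of recent sightings of one nearby car."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)   # older observations fall away automatically

    def observe(self, t, x, y):
        self.history.append((t, x, y))

    def estimated_velocity(self):
        """Average velocity over the window; None until two sightings exist."""
        if len(self.history) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

car_ahead = TrackedCar()
for t, x in enumerate(range(0, 50, 10)):       # the other car advances 10 m every second
    car_ahead.observe(t, float(x), 0.0)
print(car_ahead.estimated_velocity())          # roughly (10.0, 0.0) metres per second
</code></pre>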
<p>So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to <a href="http://dx.doi.org/10.1162/NECO_a_00475">make up for human shortcomings</a> by letting the machines build their own representations.</p>
<h2>Type III AI: Theory of mind</h2>
<p>We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what those representations need to be about.</p>
<p>Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “<a href="http://dx.doi.org/10.1017/S0140525X00076512">theory of mind</a>” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.</p>
<p>This was crucial to <a href="https://theconversation.com/can-great-apes-read-your-mind-66224">how we humans formed societies</a>, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible. </p>
<p>If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.</p>
<h2>Type IV AI: Self-awareness</h2>
<p>The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it. </p>
<p>This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.</p>
<p>While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step towards understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.</p>
<p class="fine-print"><em><span>Arend Hintze works for Michigan State University. He receives funding from NSF and Strength in Numbers Game Company to research AI. </span></em></p>We need to do more than teach machines to learn. We need to overcome the barriers that separate machines from us – and us from them.Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.