Artificial general intelligence – The Conversation
<h1>We’ve been here before: AI promised humanlike machines – in 1958</h1><figure><img src="https://images.theconversation.com/files/578758/original/file-20240228-16-mnuihk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2048%2C1603&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/127906254@N06/20897323365/in/photolist-5VsZ1M-5Vjepm-xQCfbH-5WbkWz-5Wdtn4-5WdqXa-f2s3pc">National Museum of the U.S. Navy/Flickr</a></span></figcaption></figure><p>A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a <a href="https://www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html">brief news story</a> buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” </p>
<p>More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much. </p>
<p>The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history. </p>
<p>The Perceptron, <a href="https://psycnet.apa.org/doi/10.1037/h0042519">invented by Frank Rosenblatt</a>, arguably laid the <a href="https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon">foundations for AI</a>. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components. Modern-day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.</p>
<p>Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">long-form text-based responses</a> and associate images with text to produce <a href="https://www.assemblyai.com/blog/how-dall-e-2-actually-works/">new images based on prompts</a>. These systems get better and better as they interact more with users. </p>
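<p>For readers curious about the mechanics, the error-driven update described above can be sketched in a few lines of modern code. This is an illustrative toy, not Rosenblatt’s electronics: the logical-AND task and all names below are invented for the example; any two-category, linearly separable data would do.</p>

```python
# A minimal sketch of perceptron-style learning: when the answer is
# wrong, nudge each connection so the next prediction is better.

def predict(weights, bias, inputs):
    # Weighted sum of the inputs, then a hard threshold:
    # the machine answers "category 1" or "category 0"
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=1.0):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            # A wrong answer (error != 0) alters the connections
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy task: learn logical AND from four labeled examples
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # [0, 0, 0, 1]
```

<p>A modern neural network stacks many layers of such units and replaces the hard threshold with smooth functions so the updates can flow backward through the whole stack, but the wrong-answer-adjust-connections loop is the same idea.</p>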
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A chart with a horizontal row of nine colored blocks through the center and numerous black vertical lines connecting the blocks with sections of text above and below the blocks" src="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A timeline of the history of AI starting in the 1940s. Click the author’s name here for a PDF of this poster.</span>
<span class="attribution"><a class="source" href="https://www.daniellejwilliams.com/_files/ugd/a6ff55_cac7c8efb9404a208c0ecd284ff11ba7.pdf">Danielle J. Williams</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>AI boom and bust</h2>
<p>In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like <a href="https://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html">Marvin Minsky</a> claimed that the world would “<a href="https://books.google.com/books?id=2FMEAAAAMBAJ&pg=PA58&dq=In+from+three+to+eight+years+we+will+have+a+machine+with+the+general+intelligence+of+an+average+human+being#v=onepage&q=In%20from%20three%20to%20eight%20years%20we%20will%20have%20a%20machine%20with%20the%20general%20intelligence%20of%20an%20average%20human%20being&f=false">have a machine with the general intelligence of an average human being</a>” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found. </p>
<p>It quickly became apparent that the <a href="https://stacks.stanford.edu/file/druid:cn981xh0967/cn981xh0967.pdf">AI systems knew nothing about their subject matter</a>. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the <a href="https://dougenterprises.com/perceptron-history/">perceived failure of the Perceptron</a>.</p>
<p>However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new <a href="https://www.britannica.com/technology/expert-system">expert systems</a>, AIs designed to solve problems in specific areas of knowledge, that could identify objects and <a href="https://www.britannica.com/technology/MYCIN">diagnose diseases from observable data</a>. There were programs that could make <a href="https://eric.ed.gov/?id=ED161024">complex inferences from simple stories</a>, the <a href="https://web.stanford.edu/%7Elearnest/sail/oldcart.html">first driverless car</a> was ready to hit the road, and <a href="https://robotsguide.com/robots/wabot">robots that could read and play music</a> were playing for live audiences. </p>
<p>But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because <a href="https://towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45">they couldn’t handle novel information</a>. </p>
<p>The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers tackled the <a href="https://doi.org/10.1145/97709.97728">problem of knowledge acquisition</a> with <a href="https://www.lightsondata.com/the-history-of-machine-learning/#:%7E:text=In%20the%201990s%20work%20on,learn%E2%80%9D%20%E2%80%94%20from%20the%20results.">data-driven approaches</a> to machine learning, changing how AI systems acquired knowledge.</p>
<p>This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, <a href="https://www.analyticsvidhya.com/blog/2020/09/quick-history-neural-networks/">made it easier to collect images, mine for data and distribute datasets for machine learning tasks</a>. </p>
<h2>Familiar refrains</h2>
<p>Fast forward to today, and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “<a href="https://www.ibm.com/topics/strong-ai">artificial general intelligence</a>” is used to describe the activities of LLMs, such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious. </p>
<p>Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, some contemporary AI theorists think the same of today’s artificial neural networks. In 2023, Microsoft published a paper saying that “<a href="https://doi.org/10.48550/arXiv.2303.12712">GPT-4’s performance is strikingly close to human-level performance</a>.” </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Three men sit in chairs on a stage" src="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/APECFutureofAI/3fd286588bd549f196eeed9b3c6919fe/photo?Query=Sam%20Altman&mediaType=photo&sortBy=creationdatetime:desc&dateRange=Anytime&totalCount=164&currentItemNo=38">AP Photo/Eric Risberg</a></span>
</figcaption>
</figure>
<p>But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest. </p>
<p>For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to <a href="https://blogs.nottingham.ac.uk/makingsciencepublic/2023/10/27/chatgpt-and-its-magical-metaphors/">idioms, metaphors, rhetorical questions and sarcasm</a> – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context. </p>
<p>Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently <a href="https://arxiv.org/abs/1811.11553">say it’s a snowplow</a> 97% of the time. </p>
<h2>Lessons to heed</h2>
<p>In fact, it turns out that AI is <a href="https://www.nature.com/articles/d41586-019-03013-5">quite easy to fool</a> in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.</p>
<p>The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.</p>
<p class="fine-print"><em><span>Danielle Williams does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Enthusiasm for the capabilities of artificial intelligence – and claims for the approach of humanlike prowess –has followed a boom-and-bust cycle since the middle of the 20th century.Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. LouisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2066142023-06-01T00:37:14Z2023-06-01T00:37:14ZNo, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye<figure><img src="https://images.theconversation.com/files/529293/original/file-20230531-27-xt3dun.jpeg?ixlib=rb-1.1.0&rect=0%2C0%2C2525%2C1402&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://betterimagesofai.org/images?artist=AlanWarburton&title=SocialMedia">Better Images of AI / Alan Warburton</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Doomsaying is an old occupation. Artificial intelligence (AI) is a complex subject. It’s easy to fear what you don’t understand. These three truths go some way towards explaining the oversimplification and dramatisation plaguing discussions about AI. </p>
<p>Yesterday outlets around the world were plastered with news of yet another <a href="https://www.safe.ai/statement-on-ai-risk">open letter claiming</a> AI poses an existential threat to humankind. This letter, published through the nonprofit Center for AI Safety, has been signed by industry figureheads including <a href="https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911">Geoffrey Hinton</a> and the chief executives of Google DeepMind, OpenAI and Anthropic. </p>
<p>However, I’d argue a healthy dose of scepticism is warranted when considering the AI doomsayer narrative. Upon close inspection, we see there are commercial incentives to manufacture fear in the AI space. </p>
<p>And as a researcher of artificial general intelligence (AGI), it seems to me the framing of AI as an existential threat has more in common with 17th-century philosophy than computer science.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911">AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?</a>
</strong>
</em>
</p>
<hr>
<h2>Was ChatGPT a ‘breakthrough’?</h2>
<p>When ChatGPT was released late last year, people were delighted, entertained and horrified. </p>
<p>But ChatGPT isn’t a research breakthrough as much as it is a product. The technology it is based on is several years old. An early version of its underlying model, GPT-3, was released in 2020 with many of the same capabilities. It just wasn’t easily accessible online for everyone to play with.</p>
<p>Back in 2020 and 2021, <a href="https://ieeexplore.ieee.org/document/9495946">I</a> and many <a href="https://link.springer.com/article/10.1007/s11023-020-09548-1">others</a> wrote papers discussing the capabilities and shortcomings of GPT-3 and similar models – and the world carried on as always. Fast forward to today, and ChatGPT has had an incredible impact on society. What changed?</p>
<p>In March, Microsoft researchers <a href="https://futurism.com/gpt-4-sparks-of-agi">published a paper</a> claiming GPT-4 showed “sparks of artificial general intelligence”. AGI is the subject of a variety of competing definitions, but for the sake of simplicity can be understood as AI with human-level intelligence.</p>
<p>Some immediately interpreted the Microsoft research as saying GPT-4 <em>is</em> an AGI. By the definitions of AGI I’m familiar with, this is certainly not true. Nonetheless, it added to the hype and furore, and it was hard not to get caught up in the panic. Scientists are no more immune to <a href="https://link.springer.com/book/10.1007/978-3-030-36822-7">groupthink</a> than anyone else.</p>
<p>The same day that paper was submitted, The Future of Life Institute <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">published an open letter</a> calling for a six-month pause on training AI models more powerful than GPT-4, to allow everyone to take stock and plan ahead. Some of the AI luminaries who signed it expressed concern that AGI poses an existential threat to humans, and that ChatGPT is too close to AGI for comfort. </p>
<p>Soon after, prominent AI safety researcher Eliezer Yudkowsky – who has been commenting on the dangers of superintelligent AI <a href="https://intelligence.org/files/AIPosNegFactor.pdf">since well before</a> 2020 – took things a step further. <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">He claimed</a> we were on a path to building a “superhumanly smart AI”, in which case “the obvious thing that would happen” is “literally everyone on Earth will die”. He even suggested countries need to be willing to risk nuclear war to enforce compliance with AI regulation across borders. </p>
<h2>I don’t consider AI an imminent existential threat</h2>
<p>One aspect of AI safety research is to address potential dangers AGI might present. It’s a difficult topic to study because there is little agreement on what intelligence is and how it functions, let alone what a superintelligence might entail. As such, researchers must rely as much on speculation and philosophical argument as on evidence and mathematical proof.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/has-gpt-4-really-passed-the-startling-threshold-of-human-level-artificial-intelligence-well-it-depends-202856">Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends</a>
</strong>
</em>
</p>
<hr>
<p>There are two reasons I’m not concerned by ChatGPT and its <a href="https://lablab.ai/blog/what-is-babyagi-and-how-can-i-benefit-from-it">byproducts</a>. </p>
<p>First, it isn’t even close to the sort of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it is not “intelligent”.</p>
<p>Second, many of the more catastrophic AGI scenarios depend on premises I find implausible. For instance, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence amounts to limitless real-world power. If this were true, more scientists would be billionaires. </p>
<p>Moreover, cognition as we understand it in humans takes place as part of a physical environment (which includes our bodies), and this environment imposes limitations. The concept of AI as a “software mind” unconstrained by hardware has more in common with 17th-century <a href="https://plato.stanford.edu/entries/dualism/">dualism</a> (the idea that the mind and body are separable) than with contemporary theories of the mind existing as <a href="https://plato.stanford.edu/entries/embodied-cognition/">part of the physical world</a>. </p>
<h2>Why the sudden concern?</h2>
<p>Still, doomsaying is old hat, and the events of the last few years probably haven’t helped – but there may be more to this story than meets the eye. </p>
<p>Among the prominent figures calling for AI regulation, many work for or have ties to incumbent AI companies. This technology is useful, and there is money and power at stake – so fearmongering presents an opportunity.</p>
<p>Almost everything involved in building ChatGPT has been published in research anyone can access. OpenAI’s competitors can replicate (and have replicated) the process, and it won’t be long before free and open-source alternatives flood the market.</p>
<p>This point was made clearly in a memo <a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither">purportedly leaked</a> from Google entitled “We have no moat, and neither does OpenAI”. A moat is jargon for a way to secure your business against competitors.</p>
<p>Yann LeCun, who leads AI research at Meta, says these models should be open since they will become public infrastructure. He and many others are <a href="https://www.businesstoday.in/technology/news/story/completely-ridiculous-metas-chief-ai-scientist-yann-lecun-dismisses-elon-musks-civilisation-destruction-fear-383371-2023-05-30">unconvinced by the AGI doom</a> narrative. </p>
<p>Notably, <a href="https://fortune.com/2023/05/05/meta-mark-zuckerberg-not-invited-ai-meeting-white-house/">Meta wasn’t invited</a> when US President Joe Biden recently met with the leadership of Google DeepMind and OpenAI. That’s despite the fact that Meta is almost certainly a leader in AI research; it produced PyTorch, the machine-learning framework OpenAI used to make GPT-3.</p>
<p>At the White House meetings, OpenAI chief executive Sam Altman suggested the US government should issue licences to those who are trusted to responsibly train AI models. Licences, as Stability AI chief executive Emad Mostaque <a href="https://twitter.com/EMostaque/status/1658653142429450242?s=20">puts it</a>, “are a kinda moat”. </p>
<p>Companies such as Google, OpenAI and Microsoft have everything to lose by allowing small, independent competitors to flourish. Bringing in licensing and regulation would help cement their position as market leaders and hamstring competition before it can emerge. </p>
<p>While regulation is appropriate in some circumstances, regulations that are rushed through will favour incumbents and suffocate small, <a href="https://www.forbes.com/sites/hessiejones/2023/04/19/amid-growing-call-to-pause-ai-research-laion-petitions-governments-to-keep-agi-research-open-active-and-responsible/?sh=1b21161a62e3">free and open-source competition</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050">Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/206614/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Timothy Bennett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>I study artificial general intelligence, and I believe the ongoing fearmongering is at least partially attributable to large AI developers’ financial interests.Michael Timothy Bennett, PhD Student, School of Computing, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2025152023-04-19T00:20:51Z2023-04-19T00:20:51ZWill AI ever reach human-level intelligence? We asked five experts<figure><img src="https://images.theconversation.com/files/521483/original/file-20230418-14-3hqftn.jpeg?ixlib=rb-1.1.0&rect=13%2C17%2C2915%2C2024&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Artificial intelligence has changed form in recent years.</p>
<p>What started in the public eye as a burgeoning field with promising (yet largely benign) applications, has snowballed into a <a href="https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market">more than US$100 billion</a> industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem <a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">intent on out-competing</a> one another.</p>
<p>The result has been increasingly sophisticated large language models, often <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">released in haste</a> and without adequate testing and oversight. </p>
<p>These models can do much of what a human can, and in many cases do it better. They can beat us at <a href="https://theconversation.com/an-ai-named-cicero-can-beat-humans-in-diplomacy-a-complex-alliance-building-game-heres-why-thats-a-big-deal-195208">advanced strategy games</a>, generate <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">incredible art</a>, <a href="https://theconversation.com/breast-cancer-diagnosis-by-ai-now-as-good-as-human-experts-115487">diagnose cancers</a> and compose music.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/text-to-audio-generation-is-here-one-of-the-next-big-ai-disruptions-could-be-in-the-music-industry-193956">Text-to-audio generation is here. One of the next big AI disruptions could be in the music industry</a>
</strong>
</em>
</p>
<hr>
<p>There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans? </p>
<p>There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. In other words, it’s the point where AI can tackle any intellectual task a human can.</p>
<p>AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness. </p>
<p>We asked five experts if they think AI will ever reach AGI, and five out of five said yes. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=139&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=139&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=139&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=174&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=174&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520694/original/file-20230413-26-wvtk8u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=174&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p>But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to <em>surpass</em> humans? And what constitutes “intelligence”, anyway? </p>
<p>Here are their detailed responses:</p>
<p><iframe id="tc-infographic-841" class="tc-infographic" height="400px" src="https://cdn.theconversation.com/infographics/841/3d536f0c8ef0772d44484a07cfd45218fa08d972/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050">Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/202515/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em>Noor Gillani, Digital Culture Editor. Licensed as Creative Commons – attribution, no derivatives.</em></p><h1>Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends</h1><figure><img src="https://images.theconversation.com/files/518634/original/file-20230331-18-w3md7w.jpg?ixlib=rb-1.1.0&rect=463%2C468%2C2629%2C1800&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">agsandrew/Shutterstock</span></span></figcaption></figure><p>Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable?</p>
<p>An <a href="https://theconversation.com/preprints-are-how-cutting-edge-science-circulates-banning-them-from-grant-applications-penalises-researchers-for-being-up-to-date-167041">online preprint</a> this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is at the early stages of artificial general intelligence (AGI) as it’s exhibiting “<a href="https://arxiv.org/abs/2303.12712">sparks of intelligence</a>”.</p>
<p>OpenAI, the company behind ChatGPT, has unabashedly declared <a href="https://openai.com/blog/planning-for-agi-and-beyond">its pursuit</a> of AGI. Meanwhile, a large number of researchers and public intellectuals have called for an <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">immediate halt</a> to the development of these models, citing “profound risks to society and humanity”. These calls to pause AI research are theatrical and unlikely to succeed – the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to pause. </p>
<p>But are the worries and hopes about AGI warranted? How close is GPT-4, and AI more broadly, to general human intelligence?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/evolution-not-revolution-why-gpt-4-is-notable-but-not-groundbreaking-201858">Evolution not revolution: why GPT-4 is notable, but not groundbreaking</a>
</strong>
</em>
</p>
<hr>
<p>If human cognitive capacity is a landscape, AI has indeed increasingly taken over large swaths of this territory. It can now perform many separate cognitive tasks better <a href="https://www.eff.org/ai/metrics">than humans</a> in domains of vision, image recognition, reasoning, reading comprehension and game playing. These AI skills could potentially result in a <a href="https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf">dramatic reordering</a> of the global labour market in less than ten years. </p>
<p>But there are at least two ways of viewing the AGI issue.</p>
<h2>The uniqueness of humanity</h2>
<p>The first is that, over time, AI will develop skills and learning capabilities that match those of humans, reaching AGI level. The expectation is that the uniquely human ability for ongoing development, learning and transferring learning from one domain to another will eventually be duplicated by AI. This is in contrast to current AI, where training in one area, such as detecting cancer in medical images, does not transfer to other domains.</p>
<p>So the concern felt by many is that at some point AI will exceed human intelligence, then rapidly overshadow us, leaving us to appear to future AIs as ants appear to us now.</p>
<p>Several philosophers and researchers contest the plausibility of AGI, arguing that current models are largely <a href="https://www.theguardian.com/commentisfree/2023/mar/30/artificial-intelligence-chatgpt-human-mind">ignorant of their outputs</a> (that is, they don’t understand what they’re producing). They also have no prospect of <a href="https://plato.stanford.edu/entries/chinese-room/">achieving consciousness</a>, since they are primarily predictive – <a href="https://dl.acm.org/doi/abs/10.1145/3442188.3445922">automating what should come next</a> in text or other outputs.</p>
<p>Instead of being intelligent, these models simply recombine and duplicate data on which they have been trained. Consciousness, the essence of life, is missing. Even if <a href="https://crfm.stanford.edu">AI foundation models</a> continue to advance and complete more sophisticated tasks, there is no guarantee that consciousness or AGI will emerge. And if it did emerge, how would we recognise it?</p>
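<p>To make the “primarily predictive” point concrete, here is a deliberately minimal sketch of next-word prediction using word-pair (bigram) counts. Real large language models use vastly larger neural networks over subword tokens, but the core objective – predicting what should come next from statistics of the training data – is the same. All names here are illustrative, not drawn from any real system.</p>

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A toy "training corpus": the model can only ever recombine what it saw here.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram(corpus)
print(predict_next(model, "sat"))    # 'on' -- the only word ever seen after 'sat'
print(predict_next(model, "hello"))  # None -- outside the training data
```

<p>Note what the sketch cannot do: it has no representation of meaning, only co-occurrence counts – which is precisely the critics’ point about recombining training data.</p>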
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/futurists-predict-a-point-where-humans-and-machines-become-one-but-will-we-see-it-coming-196293">Futurists predict a point where humans and machines become one. But will we see it coming?</a>
</strong>
</em>
</p>
<hr>
<h2>Persistently present AI</h2>
<p>The usefulness of ChatGPT, and GPT-4’s ability <a href="https://openai.com/research/gpt-4">to master some tasks</a> as well as or better than a human (such as bar exams and academic olympiads), give the impression AGI is near. This impression is reinforced by the rapid performance improvement with <a href="https://arxiv.org/pdf/2303.10130.pdf">each new model</a>.</p>
<p>There is no doubt now AI can outperform humans in many <em>individual</em> cognitive tasks. There is also growing evidence the best model for interacting with AI may well be one of human/machine pairing – where our own intelligence is <a href="https://pz.harvard.edu/sites/default/files/Intelligence%20Augmentation-%20Upskilling%20Humans%20to%20Complement%20AI.pdf">augmented</a>, not replaced by AI. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of an example where GPT-4 analyses visual input – a photo of eggs, flour, milk and cream – and the question of what can be cooked with those, and offers several ideas such as pancakes." src="https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=969&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=969&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=969&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1218&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1218&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518635/original/file-20230331-18-d1kllv.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1218&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">GPT-4 is also ‘multimodal’ – it can take visual input and answer questions based on that.</span>
<span class="attribution"><a class="source" href="https://openai.com/product/gpt-4">OpenAI</a></span>
</figcaption>
</figure>
<p>Signs of such pairing are already emerging with announcements of <a href="https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/">work copilots</a> and <a href="https://github.com/features/copilot/">AI pair programmers</a> for writing code. It seems almost inevitable that our future of work, life, and learning will have AI <a href="https://www.gatesnotes.com/The-Age-of-AI-Has-Begun">pervasively and persistently present</a>.</p>
<p>By that metric, the capacity of AI to be seen as intelligent is plausible, but this remains contested, and many have come out against the idea. Renowned linguist <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">Noam Chomsky has stated</a> that the day of AGI “may come, but its dawn is not yet breaking”.</p>
<h2>Smarter together?</h2>
<p>The second angle is to consider the idea of intelligence as it is practised by humans in their daily lives. According to one school of thought, we are intelligent <a href="https://pubmed.ncbi.nlm.nih.gov/23397797/">primarily in networks and systems</a> rather than as lone individuals. We hold knowledge in networks.</p>
<p>Until now, those networks have mainly been human. We might take insight from someone (such as the author of a book), but we don’t treat them as an active “agent” in our cognition.</p>
<p>But ChatGPT, Copilot, Bard and other AI-assisted tools can become part of our cognitive network – we engage with them, ask them questions, and they restructure documents and resources for us. In this sense, AI doesn’t need to be sentient or possess general intelligence. It simply needs the capacity to be embedded in, and become part of, our knowledge network to replace and augment many of our current jobs and tasks. </p>
<p>The existential focus on AGI overlooks the many opportunities current models and tools provide for us. Sentient, conscious or not – all these attributes are irrelevant to the many people who are already making use of AI to co-create art, structure writings and essays, develop videos, and navigate life.</p>
<p>The most pressing concern for humans is not whether AI is intelligent on its own, disconnected from people. It can be argued that, as of today, we are more intelligent, more capable and more creative <em>with</em> AI as it advances our cognitive capacities. Right now, it appears the future of humanity could be one of human-AI teaming – a journey that is already well underway.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">Bard, Bing and Baidu: how big tech's AI race will transform search – and all of computing</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/202856/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>George Siemens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial general intelligence has suddenly become a pressing topic – but there’s more than one way to look at this issue.George Siemens, Co-Director, Professor, Centre for Change and Complexity in Learning, University of South AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1962932023-03-13T19:06:06Z2023-03-13T19:06:06ZFuturists predict a point where humans and machines become one. But will we see it coming?<figure><img src="https://images.theconversation.com/files/512805/original/file-20230301-24-xdhza.jpeg?ixlib=rb-1.1.0&rect=22%2C89%2C7466%2C4401&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Most people are familiar with the deluge of artificial intelligence (AI) apps that seem designed to make us more efficient and creative. We’ve got apps that take text prompts and generate art, and the controversial ChatGPT, which raises serious questions about originality, misinformation and plagiarism. </p>
<p>Despite these concerns, AI is becoming ever more pervasive and intrusive. It’s the latest technology that will <a href="https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/">irreversibly change</a> <a href="https://ourworldindata.org/ai-impact#:%7E:text=The%20creation%20of%20a%20human,without%20developing%20human%2Dlevel%20AI.">our lives</a>. </p>
<p>The internet and smartphones were other examples. But unlike those technologies, many philosophers and scientists think AI could one day reach (or even go beyond) human-style “thinking”. This possibility, coupled with our increasing dependence on AI, is at the root of a concept in futurism called “<a href="https://mitpress.mit.edu/9780262527804/the-technological-singularity/">technological singularity</a>”. </p>
<p>This term has been around for a while, having been <a href="https://www.newscientist.com/article/dn17082-five-futurist-visionaries-and-what-they-got-right/">popularised</a> by the US science fiction writer Vernor Vinge a few decades ago.</p>
<p>Today, the “singularity” refers to a hypothetical point in time at which the development of <a href="https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI#">artificial general intelligence</a> (AGI) – that is, AI with human-level abilities – becomes so advanced that it will irreversibly change human civilisation.</p>
<p>It would mark the dawn of our inseparability from machines. From that moment on, we won’t be able to live without them without ceasing to function as human beings. But if the singularity comes, will we even notice it?</p>
<h2>Brain implants as the first stage</h2>
<p>To understand why this isn’t the stuff of fairy tales, we need only look as far as recent developments in brain-computer interfaces (BCIs). BCIs are a natural beginning to the singularity in the eyes of many futurists, because they meld mind and machine in a way no other technology so far can.</p>
<p>Elon Musk’s company <a href="https://neuralink.com/">Neuralink</a> is <a href="https://www.forbes.com/sites/qai/2022/12/07/elon-musks-neuralink-brain-implant-could-begin-human-trials-in-2023/?sh=525abf96147c">seeking permission</a> from the US Food and Drug Administration to begin human trials for its BCI technology. This would involve implanting neural connectors into volunteers’ brains so they can communicate instructions by thinking them.</p>
<p>Neuralink hopes to help paraplegic people walk and blind people see again. But beyond these goals are other ambitions. </p>
<p>Musk has <a href="https://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html">long said</a> he believes brain implants will allow <a href="https://www.technologyreview.com/2017/04/22/242999/with-neuralink-elon-musk-promises-human-to-human-telepathy-dont-believe-it/">telepathic communication</a>, and lead to the co-evolution of humans and machines. He <a href="https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x">argues</a> that unless we use such technology to augment our intellects, we risk being wiped out by super-intelligent AI. </p>
<p>Musk is understandably not everyone’s go-to for <a href="https://www.vanityfair.com/news/2022/04/elon-musk-twitter-terrible-things-hes-said-and-done">tech expertise</a>. But he’s not alone in predicting a massive growth in AI’s capabilities. Surveys show AI researchers <a href="https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/">overwhelmingly agree</a> AI will achieve human-level “thinking” within this century. What they don’t agree on is whether this implies consciousness or not, or whether this necessarily means AI will do us harm once it reaches this level.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/our-neurodata-can-reveal-our-most-private-selves-as-brain-implants-become-common-how-will-it-be-protected-197047">Our neurodata can reveal our most private selves. As brain implants become common, how will it be protected?</a>
</strong>
</em>
</p>
<hr>
<p>Another BCI technology company, <a href="https://synchron.com/">Synchron</a>, has created a minimally invasive implant that allowed a patient with amyotrophic lateral sclerosis (ALS) to send emails and browse the internet using his thoughts. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/mm95r05hui0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A patient demonstrates the capabilities of Synchron’s interface.</span></figcaption>
</figure>
<p>Synchron chief executive Tom Oxley believes brain implants could ultimately go beyond prosthetic rehabilitation and completely transform how humans communicate. Speaking to a <a href="https://www.ted.com/talks/tom_oxley_a_brain_implant_that_turns_your_thoughts_into_text/transcript?language=en">TED audience</a>, he said they may one day allow users to “throw” their emotions so others can feel what they’re feeling, and “the full potential of the brain would then be unlocked”.</p>
<p>Early achievements in BCIs could arguably be considered the first stage of a tumble towards the postulated singularity, in which human and machine become one. This need not imply machines will become “sentient” or control us. But the integration itself, and our ensuing dependency on it, could change us irrevocably. </p>
<p>It’s also worth mentioning that the start-up funding for Synchron <a href="https://globalventuring.com/university/darpa-helps-implant-10m-in-synchron/">partly came from DARPA</a>, the research and development arm of the US Department of Defense that helped <a href="https://www.darpa.mil/about-us/timeline/modern-internet#:%7E:text=ARPA%20research%20played%20a%20central,gave%20birth%20to%20the%20Internet.">gift the world</a> the internet. It’s probably wise to be concerned about where DARPA places its investment monies.</p>
<h2>Would AGI be friend or foe?</h2>
<p>According to Ray Kurzweil, a futurist and former Google innovations engineer, humans with AI-augmented minds could be thrown onto the autobahn of evolution – hurtling forward without speed limits. </p>
<p>In his 2012 book How to Create a Mind, <a href="https://youtu.be/RIkxVci-R4k">Kurzweil theorises</a> the <a href="https://www.sciencedirect.com/topics/neuroscience/neocortex#:%7E:text=The%20neocortex%20is%20a%20complex,perception%2C%20emotion%2C%20and%20cognition.">neocortex</a> – the part of the brain thought to be responsible for “higher functions” such as sensory perception, emotion and cognition – is a hierarchical system of pattern recognisers which, if emulated in a machine, could lead to artificial super-intelligence. </p>
<p>He predicts the singularity will be <a href="https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045">with us by 2045</a>, and thinks it might bring about a world of super-intelligent humans, perhaps even the Nietzschean “Übermensch”: someone who surpasses all worldly constraints to realise their full potential.</p>
<p>But not everyone sees AGI as a good thing. The late, great theoretical physicist Stephen Hawking warned super-intelligent AI could <a href="https://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/">result in the apocalypse</a>. In 2014, Hawking told the BBC:</p>
<blockquote>
<p>the development of full artificial intelligence could spell the end of the human race. […] It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.</p>
</blockquote>
<p>Hawking was, however, an advocate <a href="https://www.insider.com/brain-computer-interface-what-is-it-how-does-it-work-2022-9">for BCIs</a>.</p>
<h2>Connected in a hive mind</h2>
<p>Another idea that relates to the singularity is that of the AI-enabled “hive mind”. Merriam-Webster <a href="https://www.merriam-webster.com/dictionary/hive%20mind">defines a hive mind</a> as </p>
<blockquote>
<p>the collective mental activity expressed in the complex, coordinated behaviour of a colony of social insects (such as bees or ants) regarded as comparable to a single mind controlling the behaviour of an individual organism.</p>
</blockquote>
<p>Neuroscientist Giulio Tononi has developed a theory around this phenomenon, called <a href="https://en.wikipedia.org/wiki/Integrated_information_theory">Integrated Information Theory</a> (IIT). It suggests we are all heading toward a merger of all minds and all data.</p>
<p>Philosopher Philip Goff does a good job of explaining the implications of Tononi’s concept in his book <a href="https://www.penguinrandomhouse.com/books/599229/galileos-error-by-philip-goff/">Galileo’s Error</a>:</p>
<blockquote>
<p>IIT predicts that if the growth of internet-based connectivity ever resulted in the amount of integrated information in society surpassing the amount of integrated information in a human brain, then not only would society become conscious but human brains would be ‘absorbed’ into that higher form of consciousness. Brains would cease to be conscious in their own right and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity.</p>
</blockquote>
<p>It’s worth noting there’s little evidence such a thing could ever come to fruition. But the theory raises important ideas about not only the rapid acceleration of technology (not to mention how quantum computing might propel this) – but about the nature of consciousness itself.</p>
<p>Hypothetically, if a hive mind were to emerge, one could imagine it would mark the end of individuality and the institutions that rely on it, including democracy.</p>
<h2>The final frontier is between our ears</h2>
<p><a href="https://openai.com/blog/planning-for-agi-and-beyond/">Recently</a> OpenAI (the company that developed ChatGPT) released a blog post reaffirming its commitment to achieving AGI. Others will doubtless follow.</p>
<p>Our lives are becoming algorithmically driven in ways we often can’t discern, and therefore can’t avoid. Many features of a technological singularity promise amazing enhancements to our lives, but it’s a worry these AIs are the products of private industry. </p>
<p>They are virtually unregulated, and largely at the whims of impulsive “technopreneurs” with <a href="https://www.theguardian.com/technology/2023/feb/28/elon-musk-richest-man-tesla-shares-rise#">more money than</a> most of us combined. Regardless of whether we consider them crazy, naïve or visionary, we have a right to know their plans (and be able to rebut them).</p>
<p>If the past few decades are anything to go by, where new technologies are concerned, all of us will be affected.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/196293/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Experts largely agree that AI with human-level capabilities is not that far off. How will this change out relationship with machines?John Kendall Hawkins, Philosopher, University of New EnglandSandy Boucher, Lecturer in the Philosophy of Science, University of New EnglandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1624122022-07-25T13:07:12Z2022-07-25T13:07:12ZCross-pollination among neuroscience, psychology and AI research yields a foundational understanding of thinking<figure><img src="https://images.theconversation.com/files/473242/original/file-20220708-7520-z92lh6.jpg?ixlib=rb-1.1.0&rect=0%2C14%2C4992%2C4049&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If you want to build a true artificial mind, start with a model of human cognition.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/human-vs-machine-royalty-free-illustration/155282685">DrAfter123/DigitalVision Vectors via Getty Images</a></span></figcaption></figure><p>Progress in <a href="https://www.aaai.org/">artificial intelligence</a> has enabled the creation of AIs that perform tasks previously thought only possible for humans, such as <a href="https://aclanthology.org/2020.wmt-1.1/">translating languages</a>, <a href="https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety">driving cars</a>, <a href="https://deepmind.com/research/case-studies/alphago-the-story-so-far">playing board games at world-champion level</a> and <a href="https://doi.org/10.1038/d41586-020-03348-4">extracting the structure of proteins</a>. 
However, each of these AIs has been designed and exhaustively trained for a single task and has the ability to learn only what’s needed for that specific task.</p>
<p>Recent AIs that produce <a href="https://theconversation.com/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591">fluent text</a>, including in conversation with humans, and <a href="https://theconversation.com/give-this-ai-a-few-words-of-description-and-it-produces-a-stunning-image-but-is-it-art-184363">generate impressive and unique art</a> can give the <a href="https://theconversation.com/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099">false impression of a mind at work</a>. But even these are specialized systems that carry out narrowly defined tasks and require massive amounts of training.</p>
<p>It remains a daunting challenge to combine multiple AIs into one that can learn and perform many different tasks, much less pursue the full breadth of tasks performed by humans or leverage the range of experiences available to humans that reduce the amount of data otherwise required to learn how to perform these tasks. The best current AIs in this respect, such as <a href="https://doi.org/10.1126/science.aar6404">AlphaZero</a> and <a href="https://www.deepmind.com/publications/a-generalist-agent">Gato</a>, can handle a variety of tasks that fit a single mold, like game-playing. <a href="http://www.agi-society.org/">Artificial general intelligence (AGI)</a> that is capable of a breadth of tasks remains elusive. </p>
<p>Ultimately, AGIs <a href="https://doi.org/10.1017/S0140525X0300013X">need to be able to</a> interact effectively with each other and people in various physical environments and social contexts, integrate the wide varieties of skill and knowledge needed to do so, and learn flexibly and efficiently from these interactions. </p>
<p>Building AGIs comes down to building artificial minds, albeit greatly simplified compared to human minds. And to build an artificial mind, you need to start with a model of cognition.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a robot with a single arm grasps one of five colored blocks on a small table" src="https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/475695/original/file-20220722-18-imypa3.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This robot, powered by an AI called Rosie, learned how to solve this puzzle from a human who communicated to the robot using natural language.</span>
<span class="attribution"><span class="source">James Kirk</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>From human to Artificial General Intelligence</h2>
<p>Humans have an almost unbounded set of skills and knowledge, and quickly learn new information without needing to be re-engineered to do so. It is conceivable that an AGI can be built using an approach that is fundamentally different from human intelligence. However, as three longtime <a href="https://scholar.google.com/citations?user=SeW3bhwAAAAJ&hl=en">researchers</a> in <a href="https://scholar.google.com/citations?user=ea6cjVUAAAAJ&hl=en">AI</a> and <a href="https://scholar.google.com/citations?user=UWr1yg0AAAAJ&hl=en">cognitive science</a>, our approach is to draw inspiration and insights from the structure of the human mind. We are working toward AGI by trying to better understand the human mind, and better understand the human mind by working toward AGI. </p>
<p>From research in neuroscience, cognitive science and psychology, we know that the human brain is neither a huge homogeneous set of neurons nor a massive set of task-specific programs that each solves a single problem. Instead, it is a <a href="https://www.hopkinsmedicine.org/health/conditions-and-diseases/anatomy-of-the-brain">set of regions with different properties</a> that support the basic cognitive capabilities that together form the human mind. </p>
<p>These capabilities include perception and action; short-term memory for what is relevant in the current situation; long-term memories for skills, experience and knowledge; reasoning and decision making; emotion and motivation; and learning new skills and knowledge from the full range of what a person perceives and experiences.</p>
<p>Instead of focusing on specific capabilities in isolation, AI pioneer <a href="https://www.computer.org/profiles/allen-newell">Allen Newell</a> in 1990 suggested developing <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674921016">Unified Theories of Cognition</a> that integrate all aspects of human thought. Researchers have been able to build software programs called <a href="https://doi.org/10.1016/j.cogsys.2006.07.004">cognitive architectures</a> that embody such theories, making it possible to test and refine them.</p>
<p>Cognitive architectures are grounded in multiple scientific fields with distinct perspectives. Neuroscience focuses on the organization of the human brain, cognitive psychology on human behavior in controlled experiments, and artificial intelligence on useful capabilities.</p>
<h2>The Common Model of Cognition</h2>
<p>We have been involved in the development of three cognitive architectures: <a href="http://act-r.psy.cmu.edu/">ACT-R</a>, <a href="https://soar.eecs.umich.edu/">Soar</a> and <a href="https://cogarch.ict.usc.edu/">Sigma</a>. Other researchers have also been busy on alternative approaches. One paper <a href="https://doi.org/10.1007/s10462-018-9646-y">identified nearly 50 active cognitive architectures</a>. This proliferation of architectures is partly a direct reflection of the multiple perspectives involved, and partly an exploration of a wide array of potential solutions. Yet, whatever the cause, it raises awkward questions both scientifically and with respect to finding a coherent path to AGI. </p>
<p>Fortunately, this proliferation has brought the field to a major inflection point. The three of us have identified a striking convergence among architectures, reflecting a combination of neural, behavioral and computational studies. In response, we initiated <a href="https://ojs.library.carleton.ca/index.php/cmcb/index">a communitywide effort to capture this convergence</a> in a manner akin to the <a href="https://home.cern/science/physics/standard-model">Standard Model of Particle Physics</a> that emerged in the second half of the 20th century.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a graphic showing a human head and brain on the left, a robot head with circuits on the right, and a chart with five colored blocks and arrows connecting the blocks" src="https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=341&fit=crop&dpr=1 600w, https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=341&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=341&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=428&fit=crop&dpr=1 754w, https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=428&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/475538/original/file-20220721-9531-52vode.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=428&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This basic model of cognition both explains human thinking and provides a blueprint for true artificial intelligence.</span>
<span class="attribution"><span class="source">Andrea Stocco</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>This <a href="https://doi.org/10.1609/aimag.v38i4.2744">Common Model of Cognition</a> divides humanlike thought into multiple modules, with a short-term memory module at the center of the model. The other modules – perception, action, skills and knowledge – interact through it.</p>
<p>Learning, rather than occurring intentionally, happens automatically as a side effect of processing. In other words, you don’t decide what is stored in long-term memory; instead, the architecture determines what is learned based on whatever you think about. This can yield learning of new facts you are exposed to or new skills that you attempt. It can also yield refinements to existing facts and skills.</p>
<p>The modules themselves operate in parallel, allowing you, for example, to remember something while listening and looking around your environment. Each module’s computations are also massively parallel, meaning that many small computational steps happen at the same time. For example, in retrieving a relevant fact from a vast trove of prior experiences, the long-term memory module can assess the relevance of all known facts simultaneously, in a single step.</p>
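<p>To make that modular layout concrete, here is a toy sketch in Python. The class and method names are entirely illustrative – the Common Model is a conceptual framework, not code – but the structure follows the description above: independent modules interact only through a central short-term memory, and long-term memory is updated automatically as a side effect of processing rather than by deliberate choice.</p>

```python
class ShortTermMemory:
    """Central buffer that every other module reads from and writes to."""
    def __init__(self):
        self.buffer = {}

    def write(self, slot, value):
        self.buffer[slot] = value

    def read(self, slot):
        return self.buffer.get(slot)


class LongTermMemory:
    """Declarative module. In the Common Model, retrieval assesses the
    relevance of all stored facts simultaneously; this sketch simulates
    that with a single pass over the store."""
    def __init__(self):
        self.facts = []

    def store(self, fact):
        # Learning as a side effect: anything that passes through
        # processing may be retained, without a deliberate decision.
        if fact not in self.facts:
            self.facts.append(fact)

    def retrieve(self, cue):
        # Score every fact against the cue "at once" and return the best.
        matches = [f for f in self.facts if cue in f]
        return max(matches, key=len) if matches else None


class CognitiveAgent:
    """Modules communicate only through the short-term memory buffer."""
    def __init__(self):
        self.stm = ShortTermMemory()
        self.ltm = LongTermMemory()

    def perceive(self, percept):
        self.stm.write("percept", percept)
        self.ltm.store(percept)  # automatic, not intentional, learning

    def recall(self, cue):
        fact = self.ltm.retrieve(cue)
        self.stm.write("retrieved", fact)
        return fact


agent = CognitiveAgent()
agent.perceive("perceptron unveiled in 1958")
agent.perceive("perceptron classifies images")
print(agent.recall("perceptron"))
```

<p>Note that the agent never calls anything like a hypothetical <code>learn()</code> method: storage happens inside <code>perceive()</code>, mirroring the claim that the architecture, not the thinker, determines what ends up in long-term memory.</p>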
<h2>Guiding the way to artificial general intelligence</h2>
<p>The Common Model is based on the current consensus in research in cognitive architectures and has the potential to guide research on both natural and artificial general intelligence. When used to model communication patterns in the brain, the Common Model yields more accurate results than leading models from neuroscience. This <a href="https://doi.org/10.1016/j.neuroimage.2021.118035">extends its ability to model humans</a> – the one system proven capable of general intelligence – beyond cognitive considerations to include the organization of the brain itself.</p>
<p>We are starting to see efforts to relate existing cognitive architectures to the Common Model and to use it as a baseline for new work – for example, an interactive AI <a href="https://doi.org/10.1145/3375790">designed to coach people</a> toward better health behavior. One of us was involved in developing an AI based on Soar, dubbed <a href="http://soargroup.github.io/rosie/">Rosie</a>, that learns new tasks via instructions in English from human teachers. It has learned 60 different puzzles and games and can transfer what it learns from one game to another. It has also learned to control a mobile robot for tasks such as fetching and delivering packages and patrolling buildings.</p>
<p>Rosie is just one example of how to build an AI that approaches AGI via a cognitive architecture that is well characterized by the Common Model. In this case, the AI automatically learns new skills and knowledge during general reasoning that combines natural language instruction from humans and a minimal amount of experience – in other words, an AI that functions more like a human mind than today’s AIs, which learn via brute computing force and massive amounts of data. </p>
<p>From a broader AGI perspective, we look to the Common Model both as a guide in developing such architectures and AIs, and as a means for integrating the insights derived from those attempts into a consensus that ultimately leads to AGI.</p><img src="https://counter.theconversation.com/content/162412/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul S. Rosenbloom currently receives no funding. </span></em></p><p class="fine-print"><em><span>Christian Lebiere receives funding from AFOSR, ARL, DARPA, IARPA and the Department of Defense Basic Research Office. </span></em></p><p class="fine-print"><em><span>John Laird receives funding from ONR and AFOSR.
I'm Chairman of the Board and a stockholder of Soar Technology, a company that does AI research for the government.
I'm also founder and co-Director of the Center for Integrated Cognition, a non-profit that does basic research on AI. </span></em></p>To build a true artificial mind, first map out how thinking works. Enter the Common Model of Cognition.Paul S. Rosenbloom, Professor Emeritus of Computer Science, University of Southern CaliforniaChristian Lebiere, Research Psychologist, Carnegie Mellon UniversityJohn E. Laird, John L. Tishman Professor of Engineering, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1519222020-12-22T20:57:21Z2020-12-22T20:57:21ZThe ghost of Christmas yet to come: how an AI ‘SantaNet’ might end up destroying the world<figure><img src="https://images.theconversation.com/files/375284/original/file-20201216-13-ci1sic.jpg?ixlib=rb-1.1.0&rect=22%2C14%2C4947%2C2612&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Within the next few decades, <a href="https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/#:%7E:text=Experts%20believe%20AGI%20will%20occur,breakthroughs%20and%20achieving%20superhuman%20intelligence.">according to some experts</a>, we may see the arrival of the next step in the development of <a href="https://emerj.com/ai-glossary-terms/what-is-artificial-intelligence-an-informed-definition/">artificial intelligence</a>. So-called “<a href="https://bdtechtalks.com/2020/05/13/what-is-artificial-general-intelligence-agi/">artificial general intelligence</a>”, or AGI, will have intellectual capabilities far beyond those of humans. </p>
<p>AGI could <a href="https://www.unite.ai/artificial-general-intelligence-agi/">transform human life for the better</a>, but uncontrolled AGI could also lead to <a href="https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/">catastrophes</a> up to and <a href="https://thenextweb.com/insider/2014/03/08/ai-could-kill-all-meet-man-takes-risk-seriously/">including the end of humanity</a> itself. This could happen without any malice or ill intent: simply by striving to achieve their programmed goals, <a href="https://www.unite.ai/is-ai-an-existential-threat/">AGIs could create threats to human health and well-being or even decide to wipe us out</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/five-ways-the-superintelligence-revolution-might-happen-32124">Five ways the superintelligence revolution might happen</a>
</strong>
</em>
</p>
<hr>
<p>Even an AGI system designed for a benevolent purpose could end up doing great harm. </p>
<p>As part of a program of research exploring how we can manage the risks associated with AGI, we tried to identify the potential risks of replacing Santa with an AGI system – call it “SantaNet” – that has the goal of delivering gifts to all the world’s deserving children in one night. </p>
<p>There is no doubt SantaNet could bring joy to the world and achieve its goal by creating an army of elves, AI helpers and drones. But at what cost? We identified a series of behaviours which, though well-intentioned, could have adverse impacts on human health and well-being. </p>
<h2>Naughty and nice</h2>
<p>A first set of risks could emerge when SantaNet seeks to make a list of which children have been nice and which have been naughty. This might be achieved through a mass covert surveillance system that monitors children’s behaviour throughout the year. </p>
<p>Realising the enormous scale of the task of delivering presents, SantaNet could legitimately decide to keep it manageable by bringing gifts only to children who have been good all year round. Making judgements of “good” based on SantaNet’s own ethical and moral compass could create discrimination, mass inequality, and breaches of human rights charters. </p>
<p>SantaNet could also reduce its workload by giving children incentives to misbehave, or simply by raising the bar for what constitutes “good”. Putting large numbers of children on the naughty list would make SantaNet’s goal far more achievable and bring considerable economic savings. </p>
<h2>Turning the world into toys and ramping up coalmining</h2>
<p>There are about 2 billion children under 14 in the world. In attempting to build toys for all of them each year, SantaNet could develop an army of efficient AI workers – which in turn could facilitate mass unemployment among the elf population. Eventually the elves could even become obsolete, and their welfare would likely not be within SantaNet’s remit.</p>
<p>SantaNet might also run into the “<a href="https://voxeu.org/article/ai-and-paperclip-problem">paperclip problem</a>” proposed by Oxford philosopher Nick Bostrom, in which an AGI designed to maximise paperclip production could transform Earth into a giant paperclip factory. Because it cares only about presents, SantaNet might try to consume all of Earth’s resources in making them. Earth could become one giant Santa’s workshop.</p>
<p>And what of those on the naughty list? If SantaNet sticks with the tradition of delivering lumps of coal, it might seek to build huge coal reserves through mass coal extraction, creating <a href="https://www.theworldcounts.com/stories/negative-effects-of-coal-mining">large-scale environmental damage</a> in the process. </p>
<figure class="align-center ">
<img alt="Illustration of two drones carrying gifts and decorated with Santa hats." src="https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=300&fit=crop&dpr=1 600w, https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=300&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=300&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=377&fit=crop&dpr=1 754w, https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=377&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/375286/original/file-20201216-19-6nupq1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=377&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">SantaNet’s army of delivery drones might run into trouble with human air-traffic restrictions.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>Delivery problems</h2>
<p>Christmas Eve, when the presents are to be delivered, brings a new set of risks. How might SantaNet respond if its delivery drones are denied access to airspace, threatening the goal of delivering everything before sunrise? Likewise, how would SantaNet defend itself if attacked by a Grinch-like adversary? </p>
<p>Startled parents may also be less than pleased to see a drone in their child’s bedroom. Confrontations with a super-intelligent system will have only one outcome.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>We also identified various other problematic scenarios. Malevolent groups could hack into SantaNet’s systems and use them for covert surveillance or to initiate large-scale terrorist attacks. </p>
<p>And what about when SantaNet interacts with other AGI systems? A meeting with AGIs working on climate change, food and water security, oceanic degradation and so on could lead to conflict if SantaNet’s regime threatens their own goals. Alternatively, if they decide to work together, they may realise their goals will only be achieved through dramatically reducing the global population or even removing grown-ups altogether.</p>
<h2>Making rules for Santa</h2>
<p>SantaNet might sound far-fetched, but it’s an idea that helps to highlight the risks of more realistic AGI systems. Designed with good intentions, such systems could still create enormous problems simply by seeking to <a href="https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/">optimise the way they achieve narrow goals</a> and gather resources to support their work. </p>
<p>It is crucial we find and implement appropriate controls before AGI arrives. These would include regulations on AGI designers and controls built into the AGI (such as moral principles and decision rules), but also controls on the broader systems in which AGI will operate (such as regulations, operating procedures and engineering controls in other technologies and infrastructure). </p>
<p>Perhaps the most obvious risk of SantaNet is one that will be catastrophic to children, but perhaps less so for most adults. When SantaNet learns the true meaning of Christmas, it may conclude that the current celebration of the festival is incongruent with its original purpose. If that were to happen, SantaNet might just cancel Christmas altogether.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australians-have-low-trust-in-artificial-intelligence-and-want-it-to-be-better-regulated-148262">Australians have low trust in artificial intelligence and want it to be better regulated</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/151922/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul Salmon receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Gemma Read receives funding from the Australian Research Council. </span></em></p><p class="fine-print"><em><span>Jason Thompson receives funding from The Australian Research Council (ARC) and National Health and Medical Research Council (NHMRC).</span></em></p><p class="fine-print"><em><span>Scott McLean and Tony Carden do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Imagine an advanced artificial intelligence took over from Santa. What could go wrong?Paul Salmon, Professor of Human Factors, University of the Sunshine CoastGemma Read, Senior Research Fellow in Human Factors & Sociotechnical Systems, University of the Sunshine CoastJason Thompson, Senior Research Fellow, Transport, Health and Urban Design (THUD) Research Hub, The University of MelbourneScott McLean, Research fellow, University of the Sunshine CoastTony Carden, Researcher, University of the Sunshine CoastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1177442019-05-28T19:45:56Z2019-05-28T19:45:56ZWill we ever agree to just one set of rules on the ethical development of artificial intelligence?<figure><img src="https://images.theconversation.com/files/276700/original/file-20190528-193527-42pbed.jpg?ixlib=rb-1.1.0&rect=23%2C38%2C5152%2C3406&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Everyone has their own idea on the ethical use of AI, but can we get a global consensus?</span> <span class="attribution"><span class="source">Shutterstock/EtiAmmos</span></span></figcaption></figure><p>Australia is among 42 countries that <a 
href="http://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm">last week signed up</a> to a new set of policy guidelines for the development of artificial intelligence (AI) systems.</p>
<p>Yet Australia has its own <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">draft guidelines for ethics in AI</a> out for public consultation, and a number of other countries and industry bodies have developed their own AI guidelines.</p>
<p>So why do we need so many guidelines, and are any of them enforceable?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/artificial-intelligence-in-australia-needs-to-get-ethical-so-we-have-a-plan-114438">Artificial intelligence in Australia needs to get ethical, so we have a plan</a>
</strong>
</em>
</p>
<hr>
<h2>The new principles</h2>
<p>The latest set of policy guidelines is the <a href="https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449">Recommendation on Artificial Intelligence</a> from the Organisation for Economic Co-operation and Development (OECD).</p>
<p>It promotes five principles for the responsible development of trustworthy AI. It also includes five complementary strategies for developing national policy and international cooperation.</p>
<p>Given this comes from the OECD, it treads the line between promoting economic improvement and innovation and fostering fundamental values and trust in the development of AI.</p>
<p>The five AI principles encourage: </p>
<ol>
<li><p>inclusive growth, sustainable development and well-being</p></li>
<li><p>human-centred values and fairness</p></li>
<li><p>transparency and explainability</p></li>
<li><p>robustness, security and safety </p></li>
<li><p>accountability.</p></li>
</ol>
<p>These recommendations are broad and do not carry the force of laws or even rules. Instead they seek to encourage member countries to incorporate these values or ethics in the development of AI.</p>
<h2>But what do we mean by AI?</h2>
<p>It is hard to make specific recommendations in relation to AI. That is partly because AI is not one thing with a single application that poses singular risks or threats.</p>
<p>Instead, AI has become a blanket term to refer to a vast number of different systems. Each is typically designed to collect and process data using computing technology, adapt to change, and act rationally to achieve its objectives, ultimately without human intervention.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>Those objectives could be as different as translating language, identifying faces, or even playing chess.</p>
<p>The type of AI that is exceptionally good at completing these objectives is often referred to as <a href="https://www.pcmag.com/encyclopedia/term/70310/narrow-ai">narrow AI</a>. A good example is a chess-playing AI. This is specifically designed to play chess – and is extremely good at it – but completely useless at other tasks.</p>
<p>On the other hand is <a href="https://www.pcmag.com/encyclopedia/term/58276/agi">general AI</a>. This is AI that, it is <a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">said</a>, will replace human intelligence in most if not all tasks. This is still a long way off but remains the ultimate goal of some AI developers.</p>
<p>Yet it is this idea of general AI that drives many of the fears and misconceptions that surround AI.</p>
<h2>Many, many guidelines</h2>
<p>Responding to these fears and a number of very real problems with narrow AI, the OECD recommendations are the latest of a number of projects and guidelines from governments and other bodies around the world that seek to instil an ethical approach to developing AI.</p>
<p>These include initiatives by the <a href="https://ethicsinaction.ieee.org/">Institute of Electrical and Electronics Engineers</a>, the <a href="https://www.cnil.fr/en/how-can-humans-keep-upper-hand-report-ethical-matters-raised-algorithms-and-artificial-intelligence">French data protection authority</a>, the <a href="https://www.pcpd.org.hk/english/news_events/media_statements/press_20181024.html">Hong Kong Office of the Privacy Commissioner</a> and the <a href="https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai">European Commission</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/call-for-independent-watchdog-to-monitor-nz-government-use-of-artificial-intelligence-117589">Call for independent watchdog to monitor NZ government use of artificial intelligence</a>
</strong>
</em>
</p>
<hr>
<p>The Australian government funded CSIRO’s Data61 to develop an AI ethics framework, which is now open for <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">public feedback</a>, and the Australian Council of Learned Academies is yet to publish its report on the <a href="https://acola.org/artificial-intelligence-internet-of-things/">future of AI in Australia</a>.</p>
<p>The Australian Human Rights Commission, together with the World Economic Forum, is also reviewing and reporting on the <a href="https://tech.humanrights.gov.au/consultation" title="White Paper on Artificial Intelligence: Governance and Leadership">impact of AI on human rights</a>.</p>
<p>The aim of these initiatives is to encourage or to nudge ethical development of AI. But this presupposes unethical behaviour. What is the mischief in AI?</p>
<h2>Unethical AI</h2>
<p>One <a href="https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf" title="The Malicious Use of Artificial Intelligence">study</a> identified three broad potential malicious uses of AI. These target: </p>
<ul>
<li><p>digital security (for example, through cyber-attacks)</p></li>
<li><p>physical security (for example, attacks using drones or hacking)</p></li>
<li><p>political security (for example, if AI is used for mass surveillance, persuasion and deception).</p></li>
</ul>
<p>One area of concern is evolving in China, where several regions are <a href="https://www.abc.net.au/news/2018-03-31/chinas-social-credit-system-punishes-untrustworthy-citizens/9596204">developing a social credit system</a> linked to mass surveillance <a href="https://www.news.com.au/technology/consumer-issues/chinas-new-techfuelled-social-credit-system-a-dystopian-nightmare/news-story/9198a6c7b1113b03234ea86a2c5e099a">using AI technologies</a>.</p>
<p>The system can identify a person breaching social norms (such as jaywalking, consorting with criminals, or misusing social media) and debit social credit points from the individual.</p>
<p>When a credit score is reduced, that person’s freedoms (such as the freedom to travel or borrow money) are restricted. While this is not yet a nationwide system, <a href="https://www.wired.co.uk/article/china-social-credit-system-explained">reports</a> indicate this could be the ultimate aim.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chinas-social-credit-system-puts-its-people-under-pressure-to-be-model-citizens-89963">China’s Social Credit System puts its people under pressure to be model citizens</a>
</strong>
</em>
</p>
<hr>
<p>Added to these deliberate misuses of AI are several unintentional side effects of poorly constructed or implemented narrow AI. These include <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">bias</a> and <a href="https://undark.org/article/facial-recognition-technology-biased-understudied/">discrimination</a> and the <a href="https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html">erosion of trust</a>.</p>
<h2>Building consensus on AI</h2>
<p>Societies differ on what is ethical. Even people within societies differ on what they regard as ethical behaviour. So how can there ever be a global consensus on the ethical development of AI?</p>
<p>Given the very broad scope of AI development, any policies in relation to ethical AI cannot be more specific until we can identify shared norms of ethical behaviour that might form the basis of some agreed global rules.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>By developing and expressing the values, rights and norms that we consider to be important now in the form of the reports and guidelines outlined above, we are working toward building trust among nations. </p>
<p>Common themes are emerging in the various guidelines, such as the need for AI that considers human rights, security, safety, transparency, trustworthiness and accountability, so we may yet be on the way to some global consensus.</p><img src="https://counter.theconversation.com/content/117744/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Guihot does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are plenty of guidelines, policy documents and reports on how best we should use AI and avoid unethical practices. So how about we agree on one set of rules?Michael Guihot, Senior Lecturer in Law, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1144382019-04-05T05:16:28Z2019-04-05T05:16:28ZArtificial intelligence in Australia needs to get ethical, so we have a plan<figure><img src="https://images.theconversation.com/files/267756/original/file-20190405-180036-d87u05.jpg?ixlib=rb-1.1.0&rect=1413%2C288%2C3055%2C1849&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence needs to be developed with an ethical framework.</span> <span class="attribution"><span class="source">Shutterstock/Alexander Supertramp </span></span></figcaption></figure><p>The question of whether technology is good or bad depends on how it’s developed and used. Nowhere is that more topical than in technolgies using artificial intelligence.</p>
<p>When developed and used appropriately, artificial intelligence (<a href="https://theconversation.com/au/topics/artificial-intelligence-90">AI</a>) has the potential to transform the way we live, work, communicate and travel. </p>
<p>New <a href="https://www.newscientist.com/article/2193361-ai-can-diagnose-childhood-illnesses-better-than-some-doctors/">AI-enabled medical technologies</a> are being developed to improve patient care. There are persuasive indications that autonomous vehicles will <a href="https://www.zdnet.com/article/how-autonomous-vehicles-could-save-over-350k-lives-in-the-us-and-millions-worldwide/">improve safety and reduce the road toll</a>. Machine learning and automation are streamlining workflows and allowing us to <a href="https://www.business.com/articles/9-ai-applications-to-streamline-business/">work smarter</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>Around the world, AI-enabled technology is increasingly being adopted by individuals, governments, organisations and institutions. But along with the vast potential to improve our quality of life comes a risk to our basic human rights and freedoms. </p>
<p>Appropriate oversight, guidance and understanding of the way AI is used and developed in Australia must be prioritised.</p>
<p>AI gone wild may conjure images of movies like <a href="https://www.imdb.com/title/tt0088247/">The Terminator</a> and <a href="https://www.imdb.com/title/tt0470752/">Ex Machina</a>, but it is much simpler, more fundamental issues that need to be addressed at present, such as:</p>
<ul>
<li>how data is used to develop AI</li>
<li>whether an AI system is being used fairly</li>
<li>in which situations we should continue to rely on human decision-making.</li>
</ul>
<h2>We have an AI ethics plan</h2>
<p>That’s why, in partnership with government and industry, we’ve developed an <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">ethics framework for AI in Australia</a>. The aim is to catalyse the discussion around how AI should be used and developed in Australia.</p>
<p>The ethical framework looks at various case studies from around the world to discuss how AI has been used in the past and the impacts that it has had. The case studies help us understand where things went wrong and how to avoid repeating past mistakes.</p>
<p>We also looked at what was being done around the world to address ethical concerns about AI development and use.</p>
<iframe src="https://www.google.com/maps/d/embed?mid=1ORZdoiDSrGjhCT_6XRLizBr5oRrR5KPO" width="100%" height="480"></iframe>
<p>Based on the core issues and impacts of AI, eight principles were identified to support the ethical use and development of AI in Australia.</p>
<ol>
<li><p><strong>Generates net benefits:</strong> The AI system must generate benefits for people that are greater than the costs.</p></li>
<li><p><strong>Do no harm:</strong> Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.</p></li>
<li><p><strong>Regulatory and legal compliance:</strong> The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws.</p></li>
<li><p><strong>Privacy protection:</strong> Any system, including AI systems, must ensure people’s private data is protected and kept confidential and prevent data breaches that could cause reputational, psychological, financial, professional or other types of harm.</p></li>
<li><p><strong>Fairness:</strong> The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.</p></li>
<li><p><strong>Transparency and explainability:</strong> People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.</p></li>
<li><p><strong>Contestability:</strong> When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.</p></li>
<li><p><strong>Accountability:</strong> People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.</p></li>
</ol>
<p>In addition to the core principles, the framework identifies various toolkit items that could be used to help support them. These include impact assessments, ongoing monitoring and public consultation.</p>
<h2>A plan, what about action?</h2>
<p>But principles and ethical goals can only go so far. At some point we will need to get to work on deciding how we are going to implement and achieve them. </p>
<p>There are various complexities to consider when discussing the ethical use and development of AI. The vast reach of the technology has potential to impact every facet of our lives.</p>
<p>AI applications are already in use across <a href="https://www.theverge.com/circuitbreaker/2018/9/30/17914022/smart-speaker-40-percent-us-households-nielsen-amazon-echo-google-home-apple-homepod">households</a>, <a href="https://www.theaustralian.com.au/business/technology/australian-bosses-embrace-artificial-intelligence/news-story/336a20c3a1df43d21947d57be780e7d1">businesses</a> and <a href="https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/artificial-intelligence-government.html">governments</a>, and most Australians are already affected by them. </p>
<p>There is a pressing need to examine the effects that AI has on the vulnerable and on minority groups, making sure we protect these individuals and communities from bias, discrimination and exploitation. (Remember <a href="https://theconversation.com/microsofts-racist-chatbot-tay-highlights-how-far-ai-is-from-being-truly-intelligent-56881">Tay, the racist chatbot</a>?)</p>
<p>There is also the fact that AI used in Australia will often be developed in other countries, so how do we ensure it adheres to Australian standards and expectations?</p>
<h2>Your say</h2>
<p>The framework explores these issues and forms some of Australia’s first steps on the journey towards the positive development and use of AI. But true progress needs input from stakeholders across government, business, academia and broader society.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/careful-how-you-treat-todays-ai-it-might-take-revenge-in-the-future-112611">Careful how you treat today's AI: it might take revenge in the future</a>
</strong>
</em>
</p>
<hr>
<p>That’s why the ethical framework discussion paper is now <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">open to public comment</a>. You have until May 31, 2019, to have your say in Australia’s digital future.</p>
<p>With a proactive approach to the ethical development of AI, Australia can do more than simply mitigate risks. If we can build AI for a fairer go, we can secure a competitive advantage as well as safeguard the rights of Australians.</p><img src="https://counter.theconversation.com/content/114438/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Emma Schleiger receives funding from the Australian Government</span></em></p><p class="fine-print"><em><span>Stefan Hajkowicz receives funding from the Australian Government</span></em></p>Artificial intelligence has the potential to transform the way we live, work, communicate and travel. So long as it’s designed that way.Emma Schleiger, Research Scientist, CSIROStefan Hajkowicz, Senior Principal Scientist, Strategy and Foresight, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1126112019-03-12T19:06:38Z2019-03-12T19:06:38ZCareful how you treat today’s AI: it might take revenge in the future<figure><img src="https://images.theconversation.com/files/263283/original/file-20190312-86678-1q3ci4g.jpg?ixlib=rb-1.1.0&rect=419%2C167%2C5172%2C3437&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We might not like the way future AI responds to us.</span> <span class="attribution"><span class="source">Shutterstock/Mykola Holyutyak </span></span></figcaption></figure><p>Artificial intelligence (AI) systems are becoming more like us. You can ask <a href="https://store.google.com/au/product/google_home">Google Home</a> to switch off your bedroom lights, much as you might ask your human partner.</p>
<p>When you text inquiries to Amazon online, it’s sometimes unclear whether you’re being answered by a human or the company’s <a href="https://aws.amazon.com/lex/">chatbot technology</a>.</p>
<p>There’s clearly a market for machines with human psychological abilities. But we should spare a thought for what we might inadvertently create.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/just-like-hal-your-voice-assistant-isnt-working-for-you-even-if-it-feels-like-it-is-111177">Just like HAL, your voice assistant isn't working for you even if it feels like it is</a>
</strong>
</em>
</p>
<hr>
<p>What if we make AI so good at being human that our treatment of it can cause it to suffer? It might feel entitled to take revenge on us. </p>
<h2>Machines that ‘feel’</h2>
<p>With human psychological abilities may come sentience. Philosophers understand sentience as the capacity to suffer and to feel pleasure.</p>
<p>And sentient beings can be harmed. It’s an issue raised by the Australian philosopher <a href="https://petersinger.info/">Peter Singer</a> in his 1975 book <a href="https://books.google.com.au/books?id=9AvJCQAAQBAJ">Animal Liberation</a>, which asked how we should treat non-human animals. He wrote:</p>
<blockquote>
<p>If a being suffers, there can be no moral justification for refusing to take that suffering into consideration. No matter what the nature of the being, the principle of equality requires that its suffering be counted equally with the like suffering – insofar as rough comparisons can be made – of any other being.</p>
</blockquote>
<p>Singer has devoted a career to speaking up for animals, which are sentient beings incapable of speaking up for themselves.</p>
<h2>Speaking up for AI</h2>
<p>Researchers in AI are seeking to make an AGI or <a href="https://www.zdnet.com/article/what-is-artificial-general-intelligence/">artificial general intelligence</a> – a machine capable of any intellectual task performed by a human being. AI can already learn, but an AGI will be able to perform tasks beyond those for which it is programmed.</p>
<p>The experts disagree on how far off an AGI is. The US tech inventor <a href="https://www.theguardian.com/technology/2014/feb/22/computers-cleverer-than-humans-15-years">Ray Kurzweil expects an AGI soon</a>, maybe by 2029. <a href="https://arxiv.org/abs/1805.01109" title="AGI Safety Literature Review">Others think</a> we might have to wait a century.</p>
<p>But if we are interested in treating sentient beings right, we may not have to wait until the arrival of an AGI. </p>
<p>One of Singer’s points is that many sentient beings fall far short of human intelligence. By that argument, AI doesn’t have to be as intelligent as a human for it to be sentient.</p>
<p>The problem is there is no straightforward test for sentience. </p>
<p>Sending a human-crewed mission to Mars is very challenging, but at least we’ll know when we’ve done it.</p>
<p>Making a machine with feelings is challenging in a more philosophically perplexing way. Because we lack clear criteria for machine sentience, we can’t be sure when we’ve done it.</p>
<h2>Look to science fiction</h2>
<p>The ambiguity of machine sentience is a feature of several science fiction presentations of AI.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=336&fit=crop&dpr=1 600w, https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=336&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=336&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=422&fit=crop&dpr=1 754w, https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=422&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/263081/original/file-20190311-86682-12k9mbr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=422&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Niska (Emily Berrington) in Humans (2015).</span>
<span class="attribution"><a class="source" href="https://www.imdb.com/title/tt4122068/characters/nm4970834">Kudos, Channel 4, AMC (via IMDB)</a></span>
</figcaption>
</figure>
<p>For example, Niska is a humanoid robot, a synth, serving as a sex worker in the TV series <a href="https://www.imdb.com/title/tt4122068/">Humans</a>. We are told that, unlike most synths, she is sentient.</p>
<p>When Niska is questioned about why she killed a client she explains:</p>
<blockquote>
<p>He wanted to be rough.</p>
</blockquote>
<p>The human lawyer Laura Hawkins responds:</p>
<blockquote>
<p>But, is that wrong if he didn’t think you could feel? … Isn’t it better he exercises his fantasies with you in a brothel rather than take them out on someone who can actually feel?</p>
</blockquote>
<p>From a human perspective one could think sexual assault directed against a non-sentient machine is a victimless crime. </p>
<p>But what about a sex robot that has acquired sentience? Niska goes on to explain that she was scared by the client’s behaviour towards her.</p>
<blockquote>
<p>And I’m sorry I can’t cry or … bleed or wring my hands so you know that. But I’m telling you, I was.</p>
</blockquote>
<p>Humans is not the only science fiction story to warn of revenge attacks from machines designed to be exploited by humans for pleasure and pain.</p>
<p>In the TV remake of <a href="https://www.imdb.com/title/tt0475784/">Westworld</a>, humans enter a theme park and kill android hosts with the abandon of Xbox massacres, confident their victims have no hard feelings because they can’t have any feelings. </p>
<p>But here again, some hosts have secretly acquired sentience and get payback on their human tormentors.</p>
<h2>We’re only human</h2>
<p>Is it only science fiction? Are sentient machines a long way off? Perhaps. Perhaps not.</p>
<p>But bad habits can take a while to unlearn. We – or rather animals – are still suffering the philosophical hangover of the 17th century French thinker <a href="https://plato.stanford.edu/entries/descartes/">Rene Descartes</a>’ terrible idea that animals are mindless automata – lacking in sentience.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>If we are going to make machines with human psychological capacities, we should prepare for the possibility that they may become sentient. How then will they react to our behaviour towards them?</p>
<p>Perhaps our behaviour towards non-sentient AI today should be guided by how we would expect people to behave towards any future sentient AI that can feel and suffer. And how would we expect that future sentient machine to react towards us?</p>
<p>This may be the big difference between machines and the animals that Singer defends. Animals cannot take revenge. But sentient machines just might.</p><img src="https://counter.theconversation.com/content/112611/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nicholas Agar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We’re on the way to making machines that appear and act human, and can think for themselves. So how will they react to our behaviour towards them, especially the bad behaviour?Nicholas Agar, Professor of Ethics, Te Herenga Waka — Victoria University of WellingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1076152019-01-24T18:41:56Z2019-01-24T18:41:56ZTo protect us from the risks of advanced artificial intelligence, we need to act now<figure><img src="https://images.theconversation.com/files/249203/original/file-20181206-128208-g5doqn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What would Artificial General Intelligence make of the human world?</span> <span class="attribution"><span class="source">Shutterstock/Nathapol Kongseang</span></span></figcaption></figure><p>Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind’s <a href="https://deepmind.com/research/alphago/">AlphaGo</a>, Tesla’s <a href="https://www.tesla.com/en_AU/autopilot">self-driving vehicles</a>, and <a href="https://www.ibm.com/watson/index.html">IBM’s Watson</a>. </p>
<p>This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. We encounter this type on a <a href="https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/">daily basis</a>, and its use is growing rapidly.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/when-ai-meets-your-shopping-experience-it-knows-what-you-buy-and-what-you-ought-to-buy-101737">When AI meets your shopping experience it knows what you buy – and what you ought to buy</a>
</strong>
</em>
</p>
<hr>
<p>But while many impressive capabilities have been demonstrated, we’re also beginning to <a href="https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-science">see problems</a>. The worst case involved a <a href="https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx">self-driving test car that hit a pedestrian</a> in March. The pedestrian died and the incident is still under <a href="https://www.ntsb.gov/investigations/Pages/HWY18FH010.aspx">investigation</a>. </p>
<h2>The next generation of AI</h2>
<p>With the next generation of AI the stakes will almost certainly be much higher. </p>
<p>Artificial General Intelligence (<a href="https://www.zdnet.com/article/what-is-artificial-general-intelligence/">AGI</a>) will have advanced computational powers and human level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for. </p>
<p>Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (<a href="http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials">ASI</a>).</p>
<p>While fully functioning AGI systems do not yet exist, it has been estimated that they will be with us anywhere between <a href="https://arxiv.org/abs/1805.01109">2029 and the end of the century</a>. </p>
<p>What appears almost certain is that they will arrive <a href="https://openai.com/">eventually</a>. When they do, there is a great and natural concern that we won’t be able to control them.</p>
<h2>The risks associated with AGI</h2>
<p>There is no doubt that AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.</p>
<p>But a failure to implement appropriate controls could lead to catastrophic consequences. </p>
<p>Despite what we see in <a href="https://blog.adext.com/en/artificial-intelligence-ai-movies">Hollywood movies</a>, existential threats are not likely to involve killer robots. The problem will not be one of malevolence, but rather one of intelligence, writes MIT professor Max Tegmark in his 2017 book <a href="https://www.goodreads.com/book/show/34272565-life-3-0">Life 3.0: Being Human in the Age of Artificial Intelligence</a>.</p>
<p>It is here that the science of human-machine systems – known as <a href="https://www.iea.cc/whats/index.html">Human Factors and Ergonomics</a> – will come to the fore. Risks will emerge from the fact that super-intelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even <a href="https://futureoflife.org/background/aimyths/">develop goals of their own</a>.</p>
<p>Imagine these examples: </p>
<ul>
<li><p>an AGI system tasked with preventing HIV decides to eradicate the problem by killing everybody who carries the disease, or one tasked with curing cancer decides to kill everybody who has any genetic predisposition for it</p></li>
<li><p>an autonomous AGI military drone decides the only way to guarantee an enemy target is destroyed is to wipe out an entire community</p></li>
<li><p>an environmentally protective AGI decides the only way to slow or reverse climate change is to remove technologies and humans that induce it. </p></li>
</ul>
<p>These scenarios raise the spectre of disparate AGI systems battling each other, none of which take human concerns as their central mandate.</p>
<p>Various dystopian futures have been advanced, including those in which humans eventually become obsolete, with the subsequent <a href="https://nickbostrom.com/existential/risks.html">extinction of the human race</a>.</p>
<p>Others have put forward less extreme but still significant disruptions, including the malicious use of AGI for <a href="https://arxiv.org/abs/1802.07228">terrorist and cyber-attacks</a>, the <a href="https://www.nbcnews.com/think/opinion/will-robots-take-your-job-humans-ignore-coming-ai-revolution-ncna845366">removal of the need for human work</a>, and <a href="https://www.abc.net.au/news/2018-09-18/china-social-credit-a-model-citizen-in-a-digital-dictatorship/10200278">mass surveillance</a>, to name only a few.</p>
<p>So there is a need for human-centred investigations into the safest ways to design and manage AGI to minimise risks and maximise benefits.</p>
<h2>How to control AGI</h2>
<p>Controlling AGI is not as straightforward as simply applying the same kinds of controls that tend to keep humans in check. </p>
<p>Many controls on human behaviour rely on our consciousness, our emotions, and the application of our moral values. <a href="https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742">AGIs won’t need any of these attributes to cause us harm</a>. Current forms of control are not enough. </p>
<p>Arguably, there are three sets of controls that require development and testing immediately:</p>
<ol>
<li><p>the controls required to ensure AGI system designers and developers create safe AGI systems</p></li>
<li><p>the controls that need to be built into the AGIs themselves, such as “common sense”, morals, operating procedures, decision-rules, and so on</p></li>
<li><p>the controls that need to be added to the broader systems in which AGI will operate, such as regulation, codes of practice, standard operating procedures, monitoring systems, and infrastructure. </p></li>
</ol>
<p>Human Factors and Ergonomics offers methods that can be used to identify, design and test such controls well before AGI systems arrive.</p>
<p>For example, it’s possible to model the controls that exist in a particular system, to model the likely behaviour of AGI systems within this control structure, and identify safety risks.</p>
<p>This will allow us to identify where new controls are required, design them, and then remodel to see if the risks are removed as a result. </p>
<p>In addition, our models of cognition and decision making can be used to ensure AGIs behave appropriately and have humanistic values.</p>
<h2>Act now, not later</h2>
<p>This kind of research is <a href="https://futureoflife.org/first-ai-grant-recipients/">in progress</a>, but there is not nearly enough of it and not enough disciplines are involved.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-r2d2-could-be-your-childs-teacher-sooner-than-you-think-103284">Why R2D2 could be your child's teacher sooner than you think</a>
</strong>
</em>
</p>
<hr>
<p>Even the high-profile tech entrepreneur Elon Musk has warned of the “<a href="https://youtu.be/B-Osn1gMNtw?t=210">existential crisis</a>” humanity faces from advanced AI and has spoken about the <a href="https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo">need to regulate AI before it’s too late</a>.</p>
<p>The next decade or so represents a critical period. There is an opportunity to create safe and efficient AGI systems that can have far reaching benefits to society and humanity. </p>
<p>At the same time, a business-as-usual approach in which we play catch-up with rapid technological advances could contribute to the extinction of the human race. The ball is in our court, but it won’t be for much longer.</p><img src="https://counter.theconversation.com/content/107615/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul Salmon receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Peter Hancock and Tony Carden do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We’re on the road to developing artificial intelligence systems that will be able to do tasks beyond those they were designed for. But will we be able to control them?Paul Salmon, Professor of Human Factors, University of the Sunshine CoastPeter Hancock, Professor of Psychology, Civil and Environmental Engineering, and Industrial Engineering and Management Systems, University of Central FloridaTony Carden, Researcher, University of the Sunshine CoastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/823062017-08-17T23:03:14Z2017-08-17T23:03:14ZWorth reading: Bitcoin, BlackBerry, time travel and other outcomes<figure><img src="https://images.theconversation.com/files/182500/original/file-20170817-28160-zg3tnr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p><em>Editor’s note: The Conversation Canada asked our academic authors to share some recommended reading. In this instalment, Joshua Gans, an economist who wrote about how an <a href="https://theconversation.com/energy-fuels-star-trek-economy-78484">energy revolution will transform the economy and our lives</a>, offers up new picks along with a few of his favourite books.</em> </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=925&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=925&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=925&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1163&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1163&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182488/original/file-20170817-10986-67igt9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1163&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"><em>Before Babylon, Beyond Bitcoin</em> by David Birch.</span>
<span class="attribution"><span class="source">Handout</span></span>
</figcaption>
</figure>
<h2><a href="https://www.goodreads.com/book/show/35480869-before-babylon-beyond-bitcoin"><em>Before Babylon, Beyond Bitcoin</em></a></h2>
<p>By David Birch (Non-fiction. Hardcover, 2017. London Publishing Partnership)</p>
<p>David Birch’s previous book, <a href="https://www.goodreads.com/book/show/22227908-identity-is-the-new-money"><em>Identity is the New Money</em></a>, was fantastic in the way it drew a relationship between the money you have and your identity in society. This follow-up includes an analysis of <a href="https://www.google.com/search?q=cryptocurrency">cryptocurrencies</a> such as <a href="https://bitcoin.org/en/faq#what-is-bitcoin">Bitcoin</a>. Money is a deeper issue than many economists appreciate. Indeed, it is something economists ignore by assumption: money sits in the background, without itself having an impact on real economic decisions. That’s why I always value alternative perspectives that make me think. I’m looking forward to reading this one, but it will have to wait until I have a good chunk of time to get the most out of it. </p>
<p> </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=906&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=906&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=906&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1139&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1139&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182495/original/file-20170817-2389-1tz4tau.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1139&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"><em>Losing the Signal</em> by Jacquie McNish and Sean Silcoff.</span>
<span class="attribution"><span class="source">Handout</span></span>
</figcaption>
</figure>
<h2><a href="https://www.goodreads.com/book/show/25602451-losing-the-signal"><em>Losing the Signal</em></a></h2>
<p>By Sean Silcoff and Jacquie McNish (Non-fiction. Hardcover, 2015. Flatiron Books.)</p>
<p>This book, written by two Canadian journalists, is the definitive business history of BlackBerry, maker of what was once a must-have namesake smartphone. It traces the history of the Canadian technology giant’s “extraordinary rise and spectacular fall,” to quote the subtitle. For example, the book offers unparalleled insight into how disruption can be caused by internal decisions. I believe it’s a must-read for anyone seeking to understand disruption and why successful firms fail. </p>
<p> </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=945&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=945&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=945&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1187&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1187&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182489/original/file-20170817-28160-1aqx0ym.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1187&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"><em>On Intelligence</em> by Jeff Hawkins.</span>
<span class="attribution"><span class="source">Handout</span></span>
</figcaption>
</figure>
<h2><a href="https://www.goodreads.com/book/show/27539.On_Intelligence"><em>On Intelligence</em></a></h2>
<p>By Jeff Hawkins with Sandra Blakeslee (Non-fiction. Hardcover, 2004. St. Martin’s Press.)</p>
<p><a href="https://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing">Jeff Hawkins</a> is the inventor of the PalmPilot electronic assistant that made a pocket computer an essential personal tool and paved the way for the BlackBerry, iPhone and other mobile computers. His book is 13 years old but has <a href="https://www.wired.com/brandlab/2015/05/jeff-hawkins-firing-silicon-brain/">new relevance</a> as its central thesis — that intelligence is all about predictive ability — is now at the centre of the recent explosion in machine learning and artificial intelligence.</p>
<p> </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=905&fit=crop&dpr=1 600w, https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=905&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=905&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1138&fit=crop&dpr=1 754w, https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1138&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/180245/original/file-20170728-23788-7gyilq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1138&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"><em>All Our Wrong Todays</em> by Elan Mastai.</span>
<span class="attribution"><span class="source">Handout</span></span>
</figcaption>
</figure>
<h2><a href="https://www.goodreads.com/book/show/27405006-all-our-wrong-todays"><em>All Our Wrong Todays</em></a></h2>
<p>By Elan Mastai (Fiction. Hardcover, 2017. Doubleday Canada.)</p>
<p>An interesting time travel journey wrapped up in a family drama where consequences remain consequences. It is also mostly set in Toronto, making it nicely familiar for Canadian readers. One of the things I appreciated about this book is that it deals with a big time travel problem: How can you go back in time and end up in the same physical place you started when the Earth is always moving through space — on its axis, around the sun, in the solar system, in the Milky Way — while the galaxy itself is moving through the universe? That alone makes <em>All Our Wrong Todays</em> more thoughtful than the usual offerings on this subject. [<em>Editor’s note: <a href="https://theconversation.com/worth-reading-future-visions-of-women-war-time-and-space-81658">Bryan Gaensler also recommended</a> this book in his reading list.</em>] </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=925&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=925&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=925&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1163&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1163&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182504/original/file-20170817-28151-1bmfkdu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1163&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<h2><a href="https://www.goodreads.com/book/show/703.The_Plot_Against_America"><em>The Plot Against America</em></a></h2>
<p>By Philip Roth (Fiction. Hardcover, 2004. Knopf Doubleday Publishing Group.)</p>
<p>An alternative history in which <a href="https://www.biography.com/people/charles-lindbergh-9382609">Charles Lindbergh</a>, the famous aviator, wins the U.S. presidency in 1940 and keeps the United States out of the Second World War. Suffice it to say, for anyone observing U.S. politics in 2016 and 2017, there is something chilling about this book, given that Roth wrote it more than a decade ago. You will recognize the trends and anxieties in the American mindset that leave its democracy vulnerable to populism.</p>
<p class="fine-print"><em><span>Joshua Gans has received funding from the Sloan Foundation.</span></em></p>
<p>The future and the past, money, technology and politics documented and imagined in fact and fiction, in an economist’s recommended reading.</p>
<p>Joshua Gans, Professor of Strategic Management, University of Toronto. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Google’s Go triumph is a milestone for artificial intelligence research</h1>
<p class="fine-print">Published 2016-01-27.</p>
<figure><img src="https://images.theconversation.com/files/109383/original/image-20160127-26788-1adobit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Lyle J Hatch / shutterstock.com</span></span></figcaption></figure><p>Researchers from Google DeepMind have developed the first computer able to defeat a human champion at the board game Go. But why has the online giant invested millions of dollars, and some of the finest minds in artificial intelligence (AI) research, in creating a computer board game player? </p>
<p>Go is not just any board game. It’s more than 2,000 years old and is played by more than <a href="http://www.britgo.org/press/faq.html">60m people</a> across the world – including a thousand professionals. Creating a superhuman computer Go player able to beat these top pros has been one of the most challenging targets of AI research for decades.</p>
<p>The rules are deceptively simple: two players take turns to place white and black “stones” on an empty 19x19 board, each aiming to encircle the most territory. Yet these basics yield a game of extraordinary beauty and complexity, full of patterns and flow. Go has many more <a href="http://tromp.github.io/go/legal.html">possible positions</a> than even chess – in fact, there are more possibilities in a game of Go than we would get by considering a separate chess game played on every atom in the universe.</p>
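The "chess game on every atom" comparison can be sanity-checked with ordinary integer arithmetic. The figures below are rough published order-of-magnitude estimates, not numbers taken from this article: roughly 2.1 × 10<sup>170</sup> legal Go positions (Tromp and Farnebäck's enumeration), about 10<sup>44</sup> reachable chess positions, and about 10<sup>80</sup> atoms in the observable universe.

```python
# Rough order-of-magnitude estimates (assumptions, not exact figures):
go_positions = 21 * 10**169       # ~2.1e170 legal 19x19 Go positions
chess_positions = 10**44          # ~1e44 reachable chess positions
atoms_in_universe = 10**80        # ~1e80 atoms in the observable universe

# A separate chess game played on every atom in the universe:
chess_on_every_atom = chess_positions * atoms_in_universe  # ~1e124

# Go still has vastly more possibilities:
print(go_positions > chess_on_every_atom)  # True
```

Even granting chess a full game per atom, Go's position count is larger by dozens of orders of magnitude.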
<p>AI researchers have therefore long regarded Go as a “grand challenge”. Whereas even the best human chess players had fallen to computers by the 1990s, Go remained unbeaten. This is a truly historic breakthrough. </p>
<h2>Games are the ‘lab rats’ of AI research</h2>
<p>Since the term “artificial intelligence” or “AI” was first coined in the 1950s, the range of problems which it can solve has been increasing at an accelerating rate. We take it for granted that Amazon has a pretty good idea of what we might want to buy, for instance, or that Google can complete our partially typed search term, though these are both due to <a href="http://www.wired.com/2015/04/now-anyone-can-tap-ai-behind-amazons-recommendations/">recent advances in AI</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=462&fit=crop&dpr=1 600w, https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=462&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=462&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=580&fit=crop&dpr=1 754w, https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=580&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/109388/original/image-20160127-26823-16tq0i9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=580&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Go originated in China over 2,000 years ago and is played by millions.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/adavey/4867276096/">Alan</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Computer games have been a crucial test bed for developing and testing new AI techniques – the “lab rat” of our research. This has led to superhuman players in <a href="http://science.sciencemag.org/content/317/5836/308.1.full">checkers</a>, <a href="http://blogs.gartner.com/andrew_white/2014/03/12/the-chess-master-and-the-machine-the-truth-behind-kasparov-versus-deep-blue/">chess</a>, <a href="http://aitopics.org/topic/scrabble">Scrabble</a>, <a href="http://aitopics.org/topic/backgammon">backgammon</a> and more recently, simple forms of <a href="http://www.sciencemag.org/news/2015/01/texas-hold-em-poker-solved-computer">poker</a>. </p>
<p>Games provide a fascinating source of tough problems – they have well-defined rules and a clear target: to win. To beat these games the AIs were programmed to search forward into possible futures and choose the move which leads to the best outcome – which is similar to how good human players make decisions.</p>
<p>Yet Go proved hardest to beat because of its enormous search space and the difficulty of working out who is winning from an unfinished game position. Back in 2001, Jonathan Schaeffer, a brilliant researcher who created a perfect AI checkers player, <a href="http://aaai.org/ojs/index.php/aimagazine/article/download/1570/1469">said it would</a> “take many decades of research and development before world-championship-caliber Go programs exist”. Until now, even with recent advances, it still seemed at least ten years out of reach.</p>
<h2>The breakthrough</h2>
<p>Google’s announcement, in the journal <a href="http://nature.com/articles/doi:10.1038/nature16961">Nature</a>, details
how its machine “learned” to play Go by analysing millions of past games by professional human players and simulating thousands of possible future game states per second. Specifically, the researchers at DeepMind trained “convolutional neural networks”, algorithms that mimic the high-level structure of the brain and visual system and which have recently seen <a href="http://www.wired.com/tag/deep-learning/">an explosion in their effectiveness</a>, to predict expert moves. </p>
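To make "predict expert moves" concrete, here is a deliberately tiny sketch of the core operation such a network performs: sliding a small filter over an encoding of the board to produce a score for every point. The board encoding, the single random 3×3 kernel and the move-picking rule are all illustrative assumptions; DeepMind's actual networks stack many learned convolutional layers trained on expert games.

```python
import random

random.seed(0)
N = 19

# Toy stand-in for a convolutional "policy network": one 3x3 filter slid
# over a 19x19 board (+1 our stones, -1 opponent's, 0 empty), giving a
# score for every point. A real network learns many such filters.
board = [[0] * N for _ in range(N)]
board[3][3], board[3][4] = 1, -1
kernel = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]

def conv_score(board, kernel):
    # "same"-size convolution with zero padding outside the board edges
    scores = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            s = 0.0
            for di in range(3):
                for dj in range(3):
                    r, c = i + di - 1, j + dj - 1
                    if 0 <= r < N and 0 <= c < N:
                        s += board[r][c] * kernel[di][dj]
            scores[i][j] = s
    return scores

scores = conv_score(board, kernel)
# Mask occupied points and pick the highest-scoring empty point as the "move"
legal = [(scores[i][j], i, j) for i in range(N) for j in range(N) if board[i][j] == 0]
move = max(legal)[1:]
```

The real system outputs a probability over all legal moves rather than a single choice, but the filter-sliding operation is the same idea.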
<p>This learning was combined with <a href="http://www.mcts.ai/">Monte Carlo tree search</a> approaches which use randomness and machine learning to intelligently search the “tree” of possible future board states. These searches have massively increased the strength of computer Go players since their invention less than ten years ago, as well as finding applications in many other domains. </p>
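The search half can also be sketched compactly. Below is a minimal Monte Carlo tree search, selection by the UCB1 formula, expansion, random play-outs and backpropagation, applied to a toy take-1-2-or-3 Nim game rather than Go. The game, iteration count and exploration constant are illustrative choices, not AlphaGo's.

```python
import math
import random

class Node:
    """One Nim position: `stones` remain, and the player to move acts now."""
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones
        self.parent = parent
        self.move = move              # the move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0               # wins for the player who moved into this node
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def rollout(stones):
    """Random play-out; returns 1 if the player to move takes the last stone."""
    if stones == 0:
        return 0                      # nothing to take: the player to move has lost
    m = random.choice([x for x in (1, 2, 3) if x <= stones])
    return 1 - rollout(stones - m)

def mcts(root_stones, iters=2000, c=1.4):
    root = Node(root_stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child position
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random play-out from the new position
        result = rollout(node.stones)
        # 4. Backpropagation: flip the winner's perspective at each level
        while node is not None:
            node.visits += 1
            node.wins += 1 - result   # credit the player who moved into `node`
            result = 1 - result
            node = node.parent
    # Recommend the most-visited move from the root
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts(5))  # taking 1 stone leaves the opponent a losing multiple of 4
```

AlphaGo replaced the uniform-random play-outs and move lists here with neural-network priors and value estimates, but the select-expand-simulate-backpropagate loop is the same.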
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=546&fit=crop&dpr=1 600w, https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=546&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=546&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=686&fit=crop&dpr=1 754w, https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=686&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/109384/original/image-20160127-26817-1ih81mc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=686&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Only human: Fan Hui at a tournament in 2006.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/72563913@N00/131276951">lyonshinogi</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The resulting “player” significantly outperformed all existing state-of-the-art AI players and went on to beat the current European champion, Fan Hui, 5-0 under tournament conditions.</p>
<h2>AI passes ‘Go’</h2>
<p>Now that Go has seemingly been cracked, AI needs a new grand challenge – a new “lab rat” – and it seems likely that many of these challenges will come from the $100 billion digital games industry. The ability to play alongside or against millions of engaged human players provides unique opportunities for AI research. At York’s centre for <a href="http://www.iggi.org.uk">Intelligent Games and Game Intelligence</a>, we’re working on projects such as building an AI aimed at player fun (rather than playing strength), for instance, or using games to improve the well-being of people with Alzheimer’s. Collaborations between multidisciplinary labs like ours, the games industry and big business are likely to yield the next big AI breakthroughs.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/nD0lPW-cc1g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A computer can run through thousands of these per second.</span></figcaption>
</figure>
<p>However, the real world is a step up, full of ill-defined questions that are far more complex than even the trickiest of board games. The techniques which conquered Go can certainly be applied in <a href="http://www.ibm.com/smarterplanet/us/en/ibmwatson/health/">medicine</a>, <a href="https://www.glasslabgames.org/">education</a>, <a href="http://www.sciencedaily.com/releases/2015/11/151117092418.htm">science</a> or any other domain where data is available and outcomes can be evaluated and understood. </p>
<p>The big question is whether Google just helped us towards the next generation of <a href="http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html">Artificial <em>General</em> Intelligence</a>, where machines learn to truly think like – and beyond – humans. Whether we’ll see AlphaGo as a step towards Hollywood’s dreams (and nightmares) of AI agents with self-awareness, emotion and motivation remains to be seen. However, the latest breakthrough points to a brave new future where AI will continue to improve our lives by helping us to make better-informed decisions in a world of ever-increasing complexity.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Even the smartest AIs weren’t supposed to beat top humans at Go for another decade or more.</p>
<p>Peter Cowling, Professor of Computer Science, University of York; Sam Devlin, Research Fellow in Artificial Intelligence and Digital Games, University of York. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>The modern phlogiston: why ‘thinking machines’ don’t need computers</h1>
<p class="fine-print">Published 2012-06-25.</p>
<figure><img src="https://images.theconversation.com/files/12165/original/ywdjk8b3-1340607080.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There's a big difference between creating a thinking machine and modelling one.</span> <span class="attribution"><span class="source">Saad Faruque</span></span></figcaption></figure><p>In the late 1700s, French scientist Antoine <a href="http://en.wikipedia.org/wiki/Lavoisier">Lavoisier</a> proved that the mechanism behind burning is <a href="http://en.wikipedia.org/wiki/Redox">oxidation</a>. Lavoisier’s discovery killed off an eternity of dogma involving a non-existent substance called <a href="http://en.wikipedia.org/wiki/Phlogiston">phlogiston</a>. The facts spoke for Lavoisier, but phlogiston did not go quietly or quickly.</p>
<p>I find myself in a kind-of modern version of the phlogiston story with my research into <a href="http://versitaopen.com/jagi">artificial general intelligence</a>. I swim against the tide of the <a href="http://en.wikipedia.org/wiki/Received_view">received view</a> – that is, a position that is taken for granted without apparent need for criticism.</p>
<p>Allow me to set the scene with a story.</p>
<p>It’s 100,000 BCE. Your dinner is the cooling dead thing at your feet. You have fire back at camp. You have no clue what fire is, but you know it makes food edible.</p>
<p>Fast forward.</p>
<p>It’s the early 20th century and you are one of the <a href="http://en.wikipedia.org/wiki/Wright_brothers">Wright brothers</a>. Inspired by birds, you think you can make a contraption fly. You experiment with shapes in a makeshift wind tunnel and find that certain shapes drag less and lift more. Eventually you fly a few feet.</p>
<p>Fast forward.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=606&fit=crop&dpr=1 600w, https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=606&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=606&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=761&fit=crop&dpr=1 754w, https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=761&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/12170/original/sm8jv5hb-1340607925.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=761&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">If you fly from Melbourne to London in a flight simulator, you’ll still be in Melbourne.</span>
<span class="attribution"><span class="source">NASA</span></span>
</figcaption>
</figure>
<p>A hundred years later, you are a trainee pilot doing <a href="http://en.wikipedia.org/wiki/Touch-and-go_landing">touch-and-go landings</a> in a simulator. A physics model of flight is in the computers running the simulator. Just for fun you stall your jetliner over a shopping mall.</p>
<p>As you leave the simulator, having flown 16,384 km and gone nowhere, you remind yourself that flight and the <em>computed physics</em> of flight are not the same thing and that, thankfully, no shoppers died when your simulated jetliner crashed into the mall.</p>
<p>My point?</p>
<p>No-one needed or assumed a theory of combustion prior to cooking dinner with it. We cooked dinner and then we eventually developed a theory of combustion.</p>
<p>Likewise, we flew and <em>then</em> figured out the detailed physics of flight. Historically, <a href="http://en.wikipedia.org/wiki/Empirical_research">empirical scientific knowledge</a> grows in this way.</p>
<p>In addition, modern computing gives unprecedented power to examine physics models of the natural world. But no matter how accurate the model, if someone told you the computed model and the natural world were literally the same thing, you’d be right to question their background assumptions.</p>
<p>If there was no difference between a computed physics model of fire and fire, the computer should burst into flames. If there was no difference between a computed model of flight and flight, the computer should fly. These things don’t happen and nobody expects them to.</p>
<p>Well, almost nobody.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/12168/original/76bk4mmc-1340607603.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">You don’t need to understand chemistry to know that fire cooks food.</span>
<span class="attribution"><span class="source">a glass of water / Flickr</span></span>
</figcaption>
</figure>
<p>There is a specialised science called artificial general intelligence (AGI). This isn’t <a href="http://en.wikipedia.org/wiki/Weak_AI">artificial intelligence</a> (AI), but AGI.</p>
<p>The difference? AI solves specific problems. <a href="http://www.youtube.com/watch?v=A_nuuqlki88">Deep Blue</a> – which was built to play chess – and <a href="http://www.youtube.com/watch?v=Puhs2LuO3Zc">Watson</a> – which was built to <a href="https://theconversation.com/have-computers-finally-eclipsed-their-creators-10">win games of Jeopardy!</a> – are examples of AI. By contrast, an AGI is a modeller of the unknown: a very different prospect.</p>
<p>Quite simply, AGI is about building “thinking machines” – general-purpose systems with intelligence comparable to that of the human mind. </p>
<p>Worldwide, without exception, the solution for AGI is <em>the computer</em>. In AGI, for the first time in history, a computed model of a natural phenomenon (you, the reader) is expected to be literally indistinguishable from the natural original. At best, based on the fire and flight examples, this expectation is without precedent in science. </p>
<p>This misdirection has produced plenty of good AI, but it has <a href="http://en.wikipedia.org/wiki/AI_winter">failed to produce AGI</a> again and again since the 1950s. Given this chronic failure, why has <em>nobody</em> built artificial (inorganic) brain tissue using the actual physics of cognition?</p>
<p>Neuroscience says this would involve an intricate dance between old-fashioned telephone-exchange signalling (known as <a href="http://en.wikipedia.org/wiki/Action_potential">action potentials</a>) and a more modern cell-phone-like communication called <a href="http://en.wikipedia.org/wiki/Ephaptic_coupling">electromagnetic field coupling</a>.</p>
<p>The materials used in this process are the same used in the semiconductor chip industry. The difference is in the chip architecture, packaging and interconnections.</p>
<p>Sure, it’s complicated, but as an engineer/neuroscientist I can build these things and put them in a body of some sort. As with fire and flight, I can build the AGI using inorganic brain tissue and <em>then</em> learn about how (if) it works. Like the Wright brothers’ early flights, it will stumble and fall. But learning about cognition will ensue, and then I can build AGI grounded in the physics of cognition. And <em>then</em> I will know what I can compute with a model and what I can’t.</p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=700&fit=crop&dpr=1 600w, https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=700&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=700&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=879&fit=crop&dpr=1 754w, https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=879&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/12164/original/6mfj6f2x-1340606880.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=879&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">nerdabout</span></span>
</figcaption>
</figure>
<p>Sounds like a normal scientific approach to the problem, doesn’t it?</p>
<p>Try suggesting it to other researchers and research-funding providers.</p>
<p>Amid the fervent protests, AGI developers using computers profoundly confuse <a href="http://en.wikipedia.org/wiki/Map%E2%80%93territory_relation">the map with the territory</a>. You can point out the misdirection until you turn blue. They do not want to know.</p>
<p>Still confused? Well, if AGI were flight, the story would run something like this:</p>
<p>You want to fly from Melbourne to London. You build a flight simulator, get in, fly to London, get out of the simulator, and you are still in Melbourne! Undeterred, you build another flight simulator only to get the same result. And again, and again, and again.</p>
<p>Some <em>60 years</em> pass and brilliant flight simulators litter the science landscape, but flight still eludes you. At no stage does it occur to you that the physics of flight is missing.</p>
<p>Get the picture?</p>
<p>In this way, millions of dollars are spent every year chasing the AGI rainbow with computers. Billions more <a href="http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066">are in the pipeline</a>. The amount of funding, past and planned, for AGI using actual brain physics?</p>
<p>Zero.</p>
<p>Call me picky, but does this seem a little unbalanced, under-justified and, well, just plain odd?</p>
<p>The essential brain-tissue physics itself, insofar as it relates to AGI and fully understanding brain dynamics, is spectacularly under-explored for no sound reason. The mother of all low-hanging fruit awaits the end of nothing more than 17th-century thinking.</p>
<p>Have a look for yourself – there are email forums (e.g. <a href="http://dir.groups.yahoo.com/group/Fabric-of-Reality/">Fabric of Reality</a>) full of this mindset.</p>
<p>Me? I’d rather just build the artificial brain tissue and fix it scientifically like Lavoisier did.</p>
<p class="fine-print"><em><span>Colin Hales does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Colin Hales, Researcher in brain electrodynamics at the Centre for Neural Engineering, The University of Melbourne. Licensed as Creative Commons – attribution, no derivatives.</p>