tag:theconversation.com,2011:/africa/topics/singularity-1867/articlesSingularity – The Conversation2023-07-12T00:10:50Ztag:theconversation.com,2011:article/2093302023-07-12T00:10:50Z2023-07-12T00:10:50ZWhat is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues<figure><img src="https://images.theconversation.com/files/536744/original/file-20230711-17-fgafa9.jpg?ixlib=rb-1.1.0&rect=0%2C816%2C2638%2C1490&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/RoZWxeFL27k">Laura Ockel/Unsplash</a></span></figcaption></figure><p>As increasingly capable artificial intelligence (AI) systems become widespread, the question of the risks they may pose has taken on new urgency. Governments, researchers and developers have <a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050">highlighted</a> AI <a href="https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614">safety</a>. </p>
<p>The EU is moving on <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">AI regulation</a>, the UK is convening an <a href="https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence">AI safety summit</a>, and Australia is <a href="https://www.chiefscientist.gov.au/GenerativeAI">seeking</a> <a href="https://www.industry.gov.au/news/responsible-ai-australia-have-your-say">input</a> on supporting safe and responsible AI.</p>
<p>The current wave of interest is an opportunity to address concrete AI safety issues like bias, misuse and labour exploitation. But many in Silicon Valley view safety through the speculative lens of “AI alignment”, which misses out on the very real harms current AI systems can do to society – and the <a href="https://write.as/sethlazar/genb">pragmatic ways</a> we can address them.</p>
<h2>What is ‘AI alignment’?</h2>
<p>“<a href="https://brianchristian.org/the-alignment-problem/">AI alignment</a>” is about trying to make sure the behaviour of AI systems matches what we <em>want</em> and what we <em>expect</em>. Alignment research tends to focus on hypothetical future AI systems, more advanced than today’s technology.</p>
<p>It’s a challenging problem because it’s hard to predict how technology will develop, and also because humans aren’t very good at knowing what we want – or agreeing about it.</p>
<p>Nevertheless, there is no shortage of alignment research. There are a host of technical and philosophical proposals with esoteric names such as “<a href="https://arxiv.org/abs/1606.03137">Cooperative Inverse Reinforcement Learning</a>” and “<a href="https://arxiv.org/abs/1810.08575">Iterated Amplification</a>”.</p>
<p>There are two broad schools of thought. In “top-down” alignment, designers explicitly specify the values and ethical principles for AI to follow (think Asimov’s <a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics">three laws of robotics</a>), while “bottom-up” efforts try to reverse-engineer human values from data, then build AI systems aligned with those values. There are, of course, difficulties in defining “human values”, deciding who chooses which values are important, and determining what happens when humans disagree. </p>
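<p>The contrast between the two schools can be sketched in a few lines of code. This is purely illustrative (real alignment research is nothing like this simple), and every name here – the actions, the rule set, the preference data – is hypothetical:</p>

```python
# Top-down: designers hard-code the rules an AI must follow (Asimov-style).
FORBIDDEN = {"harm_human"}  # hypothetical rule set

def top_down_allowed(action: str) -> bool:
    """Check a proposed action against explicitly specified rules."""
    return action not in FORBIDDEN

# Bottom-up: infer values from data about human choices, then score actions.
def bottom_up_score(action: str, preference_data: dict) -> float:
    """Score an action by how often humans preferred it in
    (hypothetical) logged comparisons: times chosen / times shown."""
    chosen, shown = preference_data.get(action, (0, 1))
    return chosen / shown

prefs = {"assist_user": (90, 100), "harm_human": (0, 100)}
print(top_down_allowed("harm_human"))         # the rule forbids it outright
print(bottom_up_score("assist_user", prefs))  # learned approval rate
```

Even this toy version surfaces the difficulties the article mentions: someone must choose what goes in the rule set, and the preference data encodes whichever humans happened to be asked.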
<p>OpenAI, the company behind the ChatGPT chatbot and the DALL-E image generator among other products, recently outlined its plans for “<a href="https://openai.com/blog/introducing-superalignment">superalignment</a>”. This plan aims to sidestep tricky questions and align a future superintelligent AI by first building a merely human-level AI to help out with alignment research. </p>
<p>But to do this they must first align the alignment-research AI…</p>
<h2>Why is alignment supposed to be so important?</h2>
<p>Advocates of the alignment approach to AI safety say failing to “solve” AI alignment could lead to huge risks, up to and including the <a href="https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are">extinction of humanity</a>.</p>
<p>Belief in these risks largely springs from the idea that “Artificial General Intelligence” (AGI) – roughly speaking, an AI system that can do anything a human can – could be developed in the near future, and could then keep improving itself without human input. In <a href="https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd">this narrative</a>, the super-intelligent AI might then annihilate the human race, either intentionally or as a side-effect of some other project.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614">No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye</a>
</strong>
</em>
</p>
<hr>
<p>In much the same way the mere possibility of heaven and hell was enough to convince the philosopher Blaise Pascal to <a href="https://en.wikipedia.org/wiki/Pascal%27s_wager">believe in God</a>, the possibility of future super-AGI is enough to convince <a href="https://futureoflife.org/cause-area/artificial-intelligence/">some groups</a> we should devote all our efforts to “solving” AI alignment.</p>
<p>There are many <a href="https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk">philosophical</a> <a href="https://en.wikipedia.org/wiki/Pascal%27s_mugging">pitfalls</a> with this kind of reasoning. It is also very <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8909911">difficult</a> to <a href="https://www.washingtonpost.com/business/energy/why-the-future-of-technology-is-so-hard-to-predict/2022/12/28/57fd3ac2-86b0-11ed-b5ac-411280b122ef_story.html">make</a> <a href="https://academic.oup.com/poq/article-abstract/14/1/93/1817720">predictions</a> about technology. </p>
<p>Even leaving those concerns aside, alignment (let alone “superalignment”) is a limited and inadequate way to think about safety and AI systems.</p>
<h2>Three problems with AI alignment</h2>
<p>First, <strong>the concept of “alignment” is not well defined</strong>. Alignment research <a href="https://www.sciencedirect.com/science/article/pii/S0004370221001065">typically aims at vague objectives</a> like building “provably beneficial” systems, or “preventing human extinction”.</p>
<p>But these goals are quite narrow. A super-intelligent AI could meet them and still do immense harm.</p>
<p>More importantly, <strong>AI safety is about more than just machines and software</strong>. Like all technology, AI is both technical and social. </p>
<p>Making safe AI will involve addressing a whole range of issues including the political economy of AI development, exploitative labour practices, problems with misappropriated data, and ecological impacts. We also need to be honest about the likely uses of advanced AI (such as pervasive authoritarian surveillance and social manipulation) and who will benefit along the way (entrenched technology companies).</p>
<p>Finally, <strong>treating AI alignment as a technical problem puts power in the wrong place</strong>. Technologists shouldn’t be the ones deciding what risks and which values count. </p>
<p>The rules governing AI systems should be determined by public debate and democratic institutions.</p>
<p>OpenAI is making some efforts in this regard, such as consulting with users in different fields of work during the design of ChatGPT. However, we should be wary of efforts to “solve” AI safety by merely gathering feedback from a broader pool of people, without allowing space to address bigger questions. </p>
<p>Another problem is a lack of diversity – ideological and demographic – among alignment researchers. Many have ties to Silicon Valley groups such as <a href="https://www.effectivealtruism.org/">effective altruists</a> and <a href="https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html">rationalists</a>, and there is a <a href="https://www.google.com.au/books/edition/The_Good_it_Promises_the_Harm_it_Does/zAamEAAAQBAJ?hl=en&gbpv=1&dq=demographics+of+effective+altruism&pg=PA26&printsec=frontcover">lack of representation</a> from women and other marginalised groups who have <a href="https://facctconference.org/2023/harm-policy.html">historically been the drivers of progress</a> in understanding the harm technology can do.</p>
<h2>If not alignment, then what?</h2>
<p>The impacts of technology on society can’t be addressed using technology alone. </p>
<p>The idea of “AI alignment” positions AI companies as guardians protecting users from rogue AI, rather than the developers of AI systems that may well perpetrate harms. While safe AI is certainly a good objective, approaching this by narrowly focusing on “alignment” ignores too many pressing and potential harms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050">Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?</a>
</strong>
</em>
</p>
<hr>
<p>So what is a better way to think about AI safety? As a social and technical problem to be addressed first of all by acknowledging and addressing existing harms.</p>
<p>This isn’t to say that alignment research won’t be useful, but the framing isn’t helpful. And hare-brained schemes like OpenAI’s “superalignment” amount to kicking the meta-ethical can one block down the road, and hoping we don’t trip over it later on.</p>
<p class="fine-print"><em><span>Aaron J. Snoswell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Is it possible to ‘align’ AI systems to human interest? A new plan to build an AI system to solve this problem highlights the limits of the idea.Aaron J. Snoswell, Research Fellow in AI Accountability, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1597062021-04-27T04:55:10Z2021-04-27T04:55:10ZWhy productivity growth stalled in 2005 (and isn’t about to improve)<figure><img src="https://images.theconversation.com/files/397269/original/file-20210427-15-ns0nvf.jpg?ixlib=rb-1.1.0&rect=761%2C277%2C2181%2C1189&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Hanna-Barbera/Shutterstock</span></span></figcaption></figure><p>Not long ago it seemed as if the future was going to get better and better — not long ago at all.</p>
<p>For me the high point was around 2005, fifteen years ago. </p>
<p>I don’t know if you can remember how you felt at the time, but for me the surge in living standards, driven by an ever-building surge in output per working hour (“productivity”) suggested things were building on themselves: each new innovation was making use of the ones that had come before to the point where….</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1146&fit=crop&dpr=1 600w, https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1146&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1146&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1440&fit=crop&dpr=1 754w, https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1440&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/397194/original/file-20210426-13-17nau4a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1440&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p>Ray Kurzweil, now the director of research at Google, summed it up in a book released in 2005 itself, titled <a href="http://singularity.com/BookExcerpts/TOC%20and%20Chapter%201.pdf">The Singularity Is Near</a>. </p>
<p>Singularity was “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed”. </p>
<p>Changes would build on each other to the point where everything changed at once.</p>
<p>Kurzweil dubbed it the “<a href="https://www.kurzweilai.net/the-law-of-accelerating-returns">law of accelerating returns</a>”.</p>
<p>Year by year in the leadup to 2005, Australia’s productivity growth had accelerated to the point where in the 15 years to 2005 it had grown 37%. </p>
<p>If it kept accelerating… </p>
<p>In the 1930s economist John Maynard Keynes foresaw “ever larger and larger classes and groups of people from whom problems of economic necessity have been practically removed”. On average the working week might fall to <a href="http://www.econ.yale.edu/smith/econ116a/keynes1.pdf">15 hours</a>.</p>
<p>In the 1970s, futurologist Alvin Toffler spoke of a <a href="https://archive.org/details/FutureShock-Toffler/mode/2up">four-hour</a> working day.</p>
<p>And then from 2005 on productivity growth collapsed. In the 15 years since, Australia’s output per working hour (productivity) has grown by just 17%.</p>
<p>Thirty-seven per cent turned out to be the high point.</p>
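<p>Converting the two 15-year totals into compound annual rates makes the collapse concrete – a quick back-of-the-envelope calculation from the figures above:</p>

```python
# Annualise the cumulative productivity growth figures cited in the article:
# 37% total in the 15 years to 2005, 17% in the 15 years since.
def annualised(total_growth: float, years: int = 15) -> float:
    """Convert a cumulative growth factor into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

before_2005 = annualised(0.37)  # ~2.1% a year
after_2005 = annualised(0.17)   # ~1.1% a year
print(f"to 2005:    {before_2005:.2%} per year")
print(f"since 2005: {after_2005:.2%} per year")
```

In other words, the annual pace of productivity growth roughly halved after 2005.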
<hr>
<p><strong>Long-run productivity growth, Australia</strong></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=406&fit=crop&dpr=1 600w, https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=406&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=406&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=511&fit=crop&dpr=1 754w, https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=511&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/397211/original/file-20210426-23-1n8j23m.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=511&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Growth in GDP per hour worked over the previous 15 years.</span>
<span class="attribution"><a class="source" href="https://www.abs.gov.au/statistics/economy/national-accounts/australian-national-accounts-national-income-expenditure-and-product/latest-release">ABS</a></span>
</figcaption>
</figure>
<hr>
<p>And not only here. In the <a href="https://www.bls.gov/opub/mlr/2021/article/the-us-productivity-slowdown-the-economy-wide-and-industry-level-analysis.htm">United States</a> and other developed economies productivity growth is divided into “before 2005” when it was rapid, and “after 2005” when it collapsed.</p>
<p>2005 is when Apple <a href="https://www.cultofmac.com/488008/jony-ive-book-excerpt-iphone/">got serious</a> about developing the iPhone. It was when many of our technological innovations really did start building on themselves.</p>
<h2>2005 is when things were meant to take off</h2>
<p>In his impressive book <a href="https://press.princeton.edu/books/hardcover/9780691147727/the-rise-and-fall-of-american-growth">The Rise and Fall of American Growth</a> economist Robert Gordon rightly points out that things like the iPhone are nothing like as genuinely useful as the innovations in the leadup to the 1940s.</p>
<p>Gordon says not a single urban home was wired for electricity in 1880, but by 1940 nearly 100% had mains power, 94% had clean piped water, 80% had flush toilets and 56% had refrigerators.</p>
<p>He says that while all of us could quite happily travel back in time 60 years from today and enjoy a recognisable lifestyle, we couldn’t have done the same travelling back 60 years from the 1940s.</p>
<h2>Instead, they stagnated</h2>
<p>It’s as if the innovation we’ve had has been less useful. As if, in the words of PayPal founder <a href="https://som.yale.edu/blog/peter-thiel-at-yale-we-wanted-flying-cars-instead-we-got-140-characters">Peter Thiel</a>, “we wanted flying cars, instead we got 140 characters”.</p>
<p>Or it might be that the things we do these days are harder to automate.</p>
<p>A century ago roughly half the Australian workforce worked in service jobs — doing things such as hairdressing and writing reports. Today it’s <a href="https://www.pc.gov.au/research/ongoing/productivity-insights/services">80%</a>.</p>
<p>Back then, 45% of us worked in farming or manufacturing. Today it’s not even 10%.</p>
<p>Services such as hairdressing, nursing and aged care are about as productive as they will ever be. It’s possible to cut hair or consult patients faster, but what’s lost is the time and personal attention spent doing it, which is part of the service.</p>
<h2>We might be reaching hard limits</h2>
<p>If productivity is output (the service) per unit of input (time spent), it doesn’t make sense to measure it where much of the output is the input.</p>
<p>That’s one of the reasons the Bureau of Statistics provides measures of what it calls <a href="https://www.abs.gov.au/methodologies/estimates-industry-level-klems-multifactor-productivity-methodology/2018-19">multi-factor productivity</a> for industries such as agriculture and mining, but not for “health and social assistance” which is Australia’s biggest employer.</p>
<p>The Bureau is working on a measure for health, but it expects it will have to use changed life expectancy, or surveys of patient “<a href="https://www.abs.gov.au/statistics/research/enhancing-measures-non-market-output-economic-statistics-progress-paper">satisfaction</a>” with their treatment, as the output.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/have-we-just-stumbled-on-the-biggest-productivity-increase-of-the-century-145104">Have we just stumbled on the biggest productivity increase of the century?</a>
</strong>
</em>
</p>
<hr>
<p>In the US as many as 30% of workers now work in “<a href="https://treasury.gov.au/sites/default/files/2019-03/01_Persuasion.pdf">persuasive industries</a>” including advertising, public relations and the law.</p>
<p>It is almost impossible to measure their output — is it success in persuading people to change their minds?</p>
<p>For public servants and writers it is possible to measure output in terms of words produced, but deeply unhelpful. It is far from certain these workers would be more productive if they worked faster.</p>
<h2>Technology might even be sending us backwards</h2>
<p>Which is a way of saying that we might be coming up against hard limits in the amount we can squeeze out of each hour of paid work. Or perhaps not. The Singularity promises us robots that can talk to <a href="https://www.vox.com/future-perfect/2020/9/9/21418390/robots-pandemic-lonelinesisolation-elderly-seniors">dementia patients</a> and bots that can write <a href="https://www.wired.com/2017/02/robots-wrote-this-story/">political news</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=970&fit=crop&dpr=1 600w, https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=970&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=970&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1219&fit=crop&dpr=1 754w, https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1219&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/397260/original/file-20210427-23-1ktvvl3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1219&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Computers are turning us into generalists.</span>
</figcaption>
</figure>
<p>And the application of technology might even be sending productivity backwards.</p>
<p>British economics writer Tim Harford points out that what drove the really big advances in productivity in manufacturing was specialisation.</p>
<p>The father of capitalist economics Adam Smith famously observed that a pin factory employing 10 specialists could produce <a href="https://www.adamsmithworks.org/pin_factory.html">48,000</a> pins a day. </p>
<p>An individual who did all of those jobs working without specialised equipment could scarcely “with his utmost industry, make one pin in a day, and certainly could not make twenty”.</p>
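<p>Smith’s own numbers make the scale of the specialisation gain easy to check:</p>

```python
# Adam Smith's pin factory arithmetic, taken directly from the figures above.
specialised_per_worker = 48_000 / 10  # 10 specialists, 48,000 pins a day
generalist_upper_bound = 20           # "certainly could not make twenty"

print(specialised_per_worker)                          # 4800.0 pins each
print(specialised_per_worker / generalist_upper_bound) # at least a 240-fold gain
```

Even taking Smith’s most generous estimate for the lone generalist, specialisation multiplied output per worker by a factor of at least 240.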
<p>Harford says technology is turning us into <a href="https://timharford.com/2021/04/technology-has-turned-back-the-clock-on-productivity/">generalists</a>.</p>
<p>“Computers have made it easier to create and circulate messages, to book travel, to design web pages,” he says. “Instead of increasing productivity, these tools tempt highly skilled, highly paid people to noodle around making bad slides.”</p>
<h2>It’ll matter for living standards</h2>
<p>I could say worse about smartphones and the 140 (now 280) characters in Twitter.</p>
<p>They might be taking away more from our work-day output than they add to it.</p>
<p>This failure of ever-increasing amounts of technology to do anything like what was expected matters, because productivity growth is what we were counting on to drive economic growth and the ability of future generations to support increasing numbers of retirees.</p>
<p>Over four <a href="https://treasury.gov.au/intergenerational-report">intergenerational reports</a> the government has revised down its estimates of productivity growth and the size of the economy in four decades time. The next five-yearly report is due later this year.</p>
<p class="fine-print"><em><span>Peter Martin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Productivity was meant to be growing faster and faster. It’s growing slower and slower.Peter Martin, Visiting Fellow, Crawford School of Public Policy, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1551252021-02-15T18:51:20Z2021-02-15T18:51:20ZA tiny crystal device could boost gravitational wave detectors to reveal the birth cries of black holes<figure><img src="https://images.theconversation.com/files/384196/original/file-20210215-15-u84vo1.jpg?ixlib=rb-1.1.0&rect=8%2C13%2C2986%2C2645&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">NSF / LIGO / Sonoma State University / A Simonnet</span>, <span class="license">Author provided</span></span></figcaption></figure><p>In 2017, astronomers witnessed the birth of a black hole for the first time. Gravitational wave detectors picked up the ripples in spacetime caused by <a href="https://en.wikipedia.org/wiki/GW170817">two neutron stars colliding</a> to form the black hole, and other telescopes then observed the resulting explosion.</p>
<p>But the real nitty-gritty of how the black hole formed, the movements of matter in the instants before it was sealed away inside the black hole’s event horizon, went unobserved. That’s because the gravitational waves thrown off in these final moments had such a high frequency that our current detectors can’t pick them up.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/at-last-weve-found-gravitational-waves-from-a-collapsing-pair-of-neutron-stars-85528">At last, we've found gravitational waves from a collapsing pair of neutron stars</a>
</strong>
</em>
</p>
<hr>
<p>If you could observe ordinary matter as it turns into a black hole, you would be seeing something similar to the Big Bang played backwards. The scientists who design gravitational wave detectors have been hard at work figuring out how to improve our detectors to make this possible.</p>
<p>Today our team is publishing <a href="https://www.nature.com/articles/s42005-021-00526-2">a paper</a> that shows how this can be done. Our proposal could make detectors 40 times more sensitive to the high frequencies we need, allowing astronomers to listen to matter as it forms a black hole.</p>
<p>It involves creating weird new packets of energy (or “quanta”) that are a mix of two types of quantum vibrations. Devices based on this technology could be added to existing gravitational wave detectors to gain the extra sensitivity needed.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/383921/original/file-20210211-17-q8esrc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An artist’s conception of photons interacting with a millimetre scale phononic crystal device placed in the output stage of a gravitational wave detector.</span>
<span class="attribution"><span class="source">Carl Knox / OzGrav / Swinburne University</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Quantum problems</h2>
<p>Gravitational wave detectors such as the <a href="https://en.wikipedia.org/wiki/LIGO">Laser Interferometer Gravitational-wave Observatory (LIGO)</a> in the United States use lasers to measure incredibly small changes in the distance between two mirrors. Because they measure changes 1,000 times smaller than the size of a single proton, the effects of quantum mechanics – the physics of individual particles or quanta of energy – play an important role in the way these detectors work.</p>
<p>Two different kinds of quantum packets of energy are involved, both predicted by Albert Einstein. In 1905 he predicted that light comes in packets of energy that we call <em>photons</em>; two years later, he predicted that heat and sound energy come in packets of energy called <em>phonons</em>. </p>
<p>Photons are used widely in modern technology, but phonons are much trickier to harness. Individual phonons are usually swamped by vast numbers of random phonons that are the heat of their surroundings. In gravitational wave detectors, phonons bounce around inside the detector’s mirrors, degrading their sensitivity.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australias-part-in-the-global-effort-to-discover-gravitational-waves-54525">Australia's part in the global effort to discover gravitational waves</a>
</strong>
</em>
</p>
<hr>
<p>Five years ago physicists realised you could <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.211104">solve the problem</a> of insufficient sensitivity at high frequency with devices that <em>combine</em> phonons with photons. They showed that devices in which energy is carried in quantum packets that share the properties of both phonons and photons can have quite remarkable properties. </p>
<p>These devices would involve a radical change to a familiar concept called “resonant amplification”. Resonant amplification is what you do when you push a playground swing: if you push at the right time, all your small pushes create big swinging.</p>
<p>The new device, called a “white light cavity”, would amplify all frequencies equally. This is like a swing that you could push any old time and still end up with big results.</p>
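<p>The swing analogy maps onto the textbook damped, driven oscillator. A toy calculation (illustrative only – the numbers bear no relation to the actual detector optics) shows why an ordinary resonant system only amplifies near one frequency, which is the limitation a white light cavity would remove:</p>

```python
import math

def steady_state_amplitude(drive_freq, natural_freq=1.0, damping=0.05):
    """Steady-state response of a damped oscillator driven at drive_freq
    with unit force amplitude -- the standard resonance curve."""
    w, w0, z = drive_freq, natural_freq, damping
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (2 * z * w0 * w)**2)

on_resonance = steady_state_amplitude(1.0)   # pushing the swing at the right time
off_resonance = steady_state_amplitude(0.5)  # pushing at the wrong time
print(on_resonance / off_resonance)          # roughly 7.5x with these numbers
```

A white light cavity, by contrast, would behave as if this ratio were close to 1: every drive frequency amplified equally.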
<p>However, nobody has yet worked out how to make one of these devices, because the phonons inside it would be overwhelmed by random vibrations caused by heat.</p>
<h2>Quantum solutions</h2>
<p>In <a href="https://www.nature.com/articles/s42005-021-00526-2">our paper</a>, published in Communications Physics, we show how two different projects currently under way could do the job.</p>
<p>The Niels Bohr Institute in Copenhagen has been <a href="https://www.nature.com/articles/nnano.2017.101">developing devices</a> called phononic crystals, in which thermal vibrations are controlled by a crystal-like structure cut into a thin membrane. The Australian Centre of Excellence for Engineered Quantum Systems has also demonstrated <a href="https://www.nature.com/articles/srep02132">an alternative system</a> in which phonons are trapped inside an ultrapure quartz lens.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=415&fit=crop&dpr=1 600w, https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=415&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=415&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=521&fit=crop&dpr=1 754w, https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=521&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/383913/original/file-20210211-16-1girq3t.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=521&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Artist’s impression of a tiny device that could boost gravitational wave detector sensitivity in high frequencies.</span>
<span class="attribution"><span class="source">Carl Knox / OzGrav / Swinburne University</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>We show that both of these systems satisfy the requirements for creating the “negative dispersion” – which spreads light frequencies in a reverse rainbow pattern – needed for white light cavities.</p>
<p>Both systems, when added to the back end of existing gravitational wave detectors, would improve sensitivity at frequencies of a few kilohertz by the factor of 40 or more needed to listen in on the birth of a black hole.</p>
<h2>What’s next?</h2>
<p>Our research does not represent an instant solution to improving gravitational wave detectors. There are enormous experimental challenges in making such devices into practical tools. But it does offer a route to the 40-fold improvement of gravitational wave detectors needed for observing black hole births.</p>
<p>Astrophysicists have predicted <a href="https://journals.aps.org/prd/abstract/10.1103/PhysRevD.100.043005">complex gravitational waveforms</a> created by the convulsions of neutron stars as they form black holes. These gravitational waves could allow us to listen in to the nuclear physics of a collapsing neutron star. </p>
<p>For example, it has been shown that they can clearly reveal whether the neutrons in the star remain as neutrons or whether they <a href="https://en.wikipedia.org/wiki/Quark_star">break up into a sea of quarks</a>, the tiniest subatomic particles of all. If we could observe neutrons turning into quarks and then disappearing into the black hole singularity, it would be the exact reverse of the Big Bang, in which particles emerged from a singularity and went on to create our universe.</p><img src="https://counter.theconversation.com/content/155125/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Blair receives funding from the Australian Research Council. </span></em></p>A small add-on to existing gravitational wave detectors could reveal what happens to matter as it becomes a black hole, a process like the big bang in reverse.David Blair, Emeritus Professor, ARC Centre of Excellence for Gravitational Wave Discovery, OzGrav, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1486152020-10-30T12:50:08Z2020-10-30T12:50:08ZThe scariest things in the universe are black holes – and here are 3 reasons<figure><img src="https://images.theconversation.com/files/366536/original/file-20201029-21-16t10z2.jpg?ixlib=rb-1.1.0&rect=48%2C62%2C4579%2C3837&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Falling into a black hole is easily the worst way to die.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/mixed-race-businesswoman-clinging-to-colleagues-royalty-free-image/476804893?adppopup=true">John M Lund Photography Inc/Getty Images</a></span></figcaption></figure><p>Halloween is a time to be haunted by ghosts, goblins and ghouls, but nothing in the universe is scarier than a black hole.</p>
<p>Black holes – regions in space where gravity is so strong that nothing can escape – are a hot topic in the news these days. Half of the <a href="https://theconversation.com/2020-nobel-prize-in-physics-awarded-for-work-on-black-holes-an-astrophysicist-explains-the-trailblazing-discoveries-147614">2020 Nobel Prize in Physics</a> was awarded to Roger Penrose for his mathematical work showing that black holes are an inescapable consequence of Einstein’s theory of gravity. Andrea Ghez and Reinhard Genzel shared the other half for showing that <a href="https://www.nobelprize.org/prizes/physics/2020/popular-information/">a massive black hole sits at the center of our galaxy</a>. </p>
<p>Black holes are scary for three reasons. If you fell into a black hole left over when a star died, you would be shredded. Also, the massive black holes seen at the center of all galaxies have insatiable appetites. And black holes are places where the laws of physics are obliterated.</p>
<p><a href="https://scholar.google.com/citations?user=OrRLRQ4AAAAJ&hl=en">I’ve been studying black holes for over 30 years</a>. In particular, <a href="http://chrisimpey-astronomy.com/all-books">I’ve focused on the supermassive black holes</a> that lurk at the center of galaxies. Most of the time they are inactive, but when they are active and eat stars and gas, the region close to the black hole can outshine the entire galaxy that hosts them. Galaxies where the black holes are active are called <a href="https://www.universetoday.com/73222/what-is-a-quasar/">quasars</a>. With all we’ve learned about black holes over the past few decades, there are still many <a href="https://wwnorton.com/books/9780393357509">mysteries to solve</a>.</p>
<h2>Death by black hole</h2>
<p>Black holes are expected to form when a massive star dies. After the star’s nuclear fuel is exhausted, its core collapses to the densest state of matter imaginable, a hundred times denser than an atomic nucleus. That’s so dense that protons, neutrons and electrons are no longer discrete particles. Since black holes are dark, they are found when <a href="https://astronomy.com/news/2018/10/a-new-way-to-spot-black-holes-in-binary-star-systems">they orbit a normal star</a>. The properties of the normal star allow astronomers to infer the properties of its dark companion, a black hole. </p>
<p>The first black hole to be confirmed was <a href="https://doi.org/10.1088%2F0004-637X%2F742%2F2%2F84">Cygnus X-1</a>, the brightest X-ray source in the Cygnus constellation. Since then, about 50 black holes have been discovered in systems where a normal star orbits a black hole. They are the nearest examples of about <a href="https://astronomy.com/magazine/2019/08/a-brief-history-of-black-holes">10 million that are expected to be scattered through the Milky Way</a>. </p>
<p>Black holes are tombs of matter; nothing can escape them, not even light. The <a href="https://en.wikipedia.org/wiki/Spaghettification">fate of anyone falling into a black hole</a> would be a painful “spaghettification,” an idea popularized by Stephen Hawking in his book <a href="http://www.randomhousebooks.com/books/77010/">“A Brief History of Time</a>.” In spaghettification, the intense gravity of the black hole would pull you apart, separating your bones, muscles, sinews and even molecules. As the poet Dante wrote of the words over the gates of hell in his Divine Comedy: abandon hope, all ye who enter here.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=350&fit=crop&dpr=1 600w, https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=350&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=350&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=439&fit=crop&dpr=1 754w, https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=439&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/366548/original/file-20201029-21-1kj8w8f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=439&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A photograph of a black hole at the center of galaxy M87. The black hole is outlined by emission from hot gas swirling around it under the influence of strong gravity near its event horizon.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/in-this-handout-photo-provided-by-the-national-science-news-photo/1136111087?adppopup=true">National Science Foundation via Getty Images</a></span>
</figcaption>
</figure>
<h2>A hungry beast in every galaxy</h2>
<p>Over the past 30 years, observations with the Hubble Space Telescope have shown that <a href="https://www.spacetelescope.org/science/black_holes/">all galaxies have black holes at their centers</a>. Bigger galaxies have bigger black holes. </p>
<p>Nature knows how to make black holes over a staggering range of masses, from star corpses a few times the mass of the Sun to monsters tens of billions of times more massive. That’s like the difference between an apple and the Great Pyramid of Giza. </p>
<p>Just last year, astronomers published the <a href="https://www.nature.com/articles/d41586-019-01155-0">first-ever picture of a black hole</a> and its event horizon, a 7-billion-solar-mass beast at the center of the M87 elliptical galaxy. </p>
<p>It’s over a thousand times bigger than the black hole in our galaxy, whose discoverers snagged this year’s Nobel Prize. These black holes are dark most of the time, but when their gravity pulls in nearby stars and gas, they flare into intense activity and pump out a huge amount of radiation. Massive black holes are dangerous in two ways. If you get too close, the enormous gravity will suck you in. And if they are in their active quasar phase, you’ll be blasted by high-energy radiation. </p>
<p>How bright is a quasar? Imagine hovering over a large city like Los Angeles at night. The roughly 100 million lights from cars, houses and streets in the city correspond to the stars in a galaxy. In this analogy, the black hole in its active state is like a light source 1 inch in diameter in downtown LA that outshines the city by a factor of hundreds or thousands. Quasars are the brightest objects in the universe. </p>
<h2>Supermassive black holes are strange</h2>
<p>The <a href="https://astronomy.com/news/2019/12/this-huge-galaxy-has-the-biggest-black-hole-ever-measured">biggest black hole discovered so far</a> weighs in at 40 billion times the mass of the Sun, and its event horizon is 20 times the size of the solar system. Whereas the outermost planets of our solar system take about 250 years to complete one orbit, this much more massive object spins once every three months. Its outer edge moves at half the speed of light. Like all black holes, the huge ones are shielded from view by an <a href="https://www.space.com/black-holes-event-horizon-explained.html">event horizon</a>. At their centers is <a href="https://www.nationalgeographic.com/science/space/universe/black-holes/">a singularity, a point in space where the density is infinite.</a> We can’t understand the interior of a black hole because the laws of physics break down. Time freezes at the event horizon and gravity becomes infinite at the singularity.</p>
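<p>As a sanity check on those numbers: the horizon size follows from the Schwarzschild radius, r = 2GM/c². Below is a back-of-envelope sketch in Python (our own calculation, not from the article, using Pluto’s orbital radius as a stand-in for “the size of the solar system”):</p>

```python
# Back-of-envelope: event horizon of a 40-billion-solar-mass black hole.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
M_SUN = 1.989e30      # solar mass, kg
PLUTO_ORBIT = 5.9e12  # Pluto's mean orbital radius, m (our proxy for "solar system size")

M = 4e10 * M_SUN        # the biggest black hole discovered so far
r_s = 2 * G * M / c**2  # Schwarzschild radius: r_s = 2GM/c^2

print(f"Event horizon radius: {r_s:.2e} m")                    # about 1.2e14 m
print(f"Relative to Pluto's orbit: {r_s / PLUTO_ORBIT:.0f}x")  # about 20x
```

<p>The roughly 20-fold ratio matches the article’s comparison with the solar system.</p>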
<p>The good news about massive black holes is that you could survive falling into one. Although their gravity is stronger, the stretching force is weaker than it would be with a small black hole and it would not kill you. The bad news is that the event horizon marks the edge of the abyss. Nothing can escape from inside the event horizon, so you could not escape or report on your experience. </p>
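<p>The weaker stretching near bigger black holes can be checked with the Newtonian tidal formula, Δa ≈ 2GML/r³, evaluated at the event horizon r = 2GM/c², which simplifies to Δa = Lc⁶/(4G²M²) – the stretch falls as the square of the mass. A rough sketch (the two masses below are our illustrative choices):</p>

```python
# Tidal stretching across a ~2 m body at the event horizon.
# The Newtonian tide 2*G*M*L / r^3, taken at r = 2GM/c^2, simplifies to
# L*c^6 / (4*G^2*M^2): doubling the mass quarters the stretch, so the
# biggest black holes have the gentlest horizons.
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30
L = 2.0  # metres, roughly a human height

def tide_at_horizon(mass_kg):
    """Head-to-toe difference in gravitational acceleration at the horizon, m/s^2."""
    return L * c**6 / (4 * G**2 * mass_kg**2)

print(tide_at_horizon(10 * M_SUN))   # stellar black hole: ~2e8 m/s^2, instantly lethal
print(tide_at_horizon(4e6 * M_SUN))  # Milky Way's black hole: ~1e-3 m/s^2, imperceptible
```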
<p>According to Stephen Hawking, black holes are <a href="https://www.vox.com/science-and-health/2018/3/14/17119320/stephen-hawking-hawking-radiation-explained">slowly evaporating</a>. In the far future of the universe, long after all stars have died and galaxies have been wrenched from view by the accelerating cosmic expansion, black holes will be the last surviving objects. </p>
<p>The most massive black holes will take an <a href="https://www.forbes.com/sites/startswithabang/2018/11/03/ask-ethan-how-do-black-holes-actually-evaporate/#353eac4f24a1">unimaginable number of years to evaporate</a>, estimated at 10 to the 100th power, or 10 with 100 zeroes after it. The scariest objects in the universe are almost eternal.</p><img src="https://counter.theconversation.com/content/148615/count.gif" alt="The Conversation" width="1" height="1" />
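<p>That figure can be reproduced from Hawking’s evaporation timescale, t ≈ 5120πG²M³/(ħc⁴). A quick sketch (our own estimate, which lands at roughly 10⁹⁹–10¹⁰⁰ years for the most massive known black holes):</p>

```python
import math

# Hawking evaporation time: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
G, c, HBAR, M_SUN = 6.674e-11, 2.998e8, 1.055e-34, 1.989e30
SECONDS_PER_YEAR = 3.156e7

M = 4e10 * M_SUN  # roughly the most massive black hole known
t_seconds = 5120 * math.pi * G**2 * M**3 / (HBAR * c**4)
t_years = t_seconds / SECONDS_PER_YEAR

print(f"Evaporation time: about 10^{math.log10(t_years):.0f} years")
```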
<p class="fine-print"><em><span>Chris Impey does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The scariest beast in the universe has an insatiable appetite and shreds its victims.Chris Impey, University Distinguished Professor of Astronomy, University of ArizonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1476142020-10-06T19:09:37Z2020-10-06T19:09:37Z2020 Nobel Prize in physics awarded for work on black holes – an astrophysicist explains the trailblazing discoveries<figure><img src="https://images.theconversation.com/files/362005/original/file-20201006-16-rmgoby.jpg?ixlib=rb-1.1.0&rect=41%2C33%2C5441%2C3489&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A black hole is an object so compact that nothing can escape its gravitational pull, not even light. They are formed when stars die and start collapsing under their own weight. Deep inside the black hole resides an infinitely hot and dense object, a so-called, singularity. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/computer-artwork-of-black-hole-royalty-free-illustration/140891305?adppopup=true">Science Photo Library - MARK GARLICK/Getty Images</a></span></figcaption></figure><p>Black holes are perhaps the most mysterious objects in nature. They warp space and time in extreme ways and contain a mathematical impossibility, a singularity – an infinitely hot and dense object within. But if black holes exist and are truly black, how exactly would we ever be able to make an observation?</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/361989/original/file-20201006-14-krqokp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Andrea Ghez, the fourth woman to win the Nobel Prize in physics.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/professor-andrea-ghez-of-black-hole-apocalypse-speaks-news-photo/824984174?adppopup=true">Frederick M. Brown/Getty Images</a></span>
</figcaption>
</figure>
<p>This morning the Nobel Committee announced that the <a href="https://www.nobelprize.org/prizes/physics/2020/prize-announcement/">2020 Nobel Prize in physics</a> will be awarded to three scientists – <a href="https://penroseinstitute.com/about/roger-penrose/">Sir Roger Penrose,</a> <a href="https://physics.berkeley.edu/people/faculty/reinhard-genzel">Reinhard Genzel</a> and <a href="http://www.astro.ucla.edu/%7Eghez/">Andrea Ghez</a> – who helped discover the answers to such profound questions. Andrea Ghez is only the fourth woman to win the Nobel Prize in physics.</p>
<p><a href="http://gravity.phy.umassd.edu/main.html">Roger Penrose is a theoretical physicist who works on black holes</a>, and his work has influenced not just me but my entire generation through his <a href="https://www.goodreads.com/author/show/1409.Roger_Penrose">series of popular books</a> that are loaded with his exquisite hand-drawn illustrations of deep physical concepts. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=563&fit=crop&dpr=1 600w, https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=563&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=563&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=707&fit=crop&dpr=1 754w, https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=707&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/361995/original/file-20201006-20-8lqv33.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=707&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Roger Penrose was famous for his detailed illustrations. This is one of his diagrams of an empty universe.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/7/7a/Penrose_diagram.svg">Roger Penrose via Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>As a graduate student in the 1990s at Penn State, where Penrose holds a visiting position, I had many opportunities to interact with him. For many years I was intimidated by this giant in my field, only stealing glimpses of him working in his office, sketching strange-looking scientific drawings on his blackboard. Later, when I finally got the courage to speak with him, I quickly realized that he is among the most approachable people around.</p>
<h2>Dying stars form black holes</h2>
<p><a href="https://penroseinstitute.com/about/roger-penrose/">Sir Roger Penrose</a> won half the prize for his seminal work in 1965 which proved, using a series of mathematical arguments, that under very general conditions, collapsing matter would trigger the formation of a black hole. </p>
<p>This rigorous result opened up the possibility that the astrophysical process of gravitational collapse, which occurs when a star runs out of its nuclear fuel, would lead to the formation of black holes in nature. He was also able to show that at the heart of a black hole must lie a physical singularity – an object with infinite density, where the laws of physics simply break down. At the singularity, our very conceptions of space, time and matter fall apart and resolving this issue is perhaps the biggest open problem in theoretical physics today.</p>
<p>Penrose <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.14.57">invented new mathematical concepts and techniques</a> while developing this proof. The equations Penrose derived in 1965 have been used by physicists studying black holes ever since. In fact, just a few years later, Stephen Hawking, alongside Penrose, used the same mathematical tools to prove that the Big Bang cosmological model – our current best model for how the entire universe came into existence – had a singularity at the very initial moment. These celebrated results are known as the Penrose-Hawking <a href="http://www.personal.soton.ac.uk/dij/GR-Explorer/singularities/singtheorems.htm">singularity theorems</a>.</p>
<p>The fact that mathematics demonstrated that astrophysical black holes can exist in nature is exactly what energized the quest to search for them using astronomical techniques. Indeed, since Penrose’s work in the 1960s, numerous black holes have been identified.</p>
<h2>Black holes play yo-yo with stars</h2>
<p>The remaining half of the prize was shared between astronomers Reinhard Genzel and Andrea Ghez, who each lead a team that discovered the presence of a supermassive black hole, 4 million times more massive than the Sun, at the <a href="https://doi.org/10.1051/0004-6361/201833718">center of our Milky Way galaxy</a>.</p>
<p>Genzel is an astrophysicist at the Max Planck Institute for Extraterrestrial Physics, Germany and the University of California, Berkeley. Ghez is an astronomer at the University of California, Los Angeles. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=415&fit=crop&dpr=1 600w, https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=415&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=415&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=522&fit=crop&dpr=1 754w, https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=522&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/361984/original/file-20201006-22-1b5ftzt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=522&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The location of the black hole in the Milky Way galaxy relative to our solar system.</span>
<span class="attribution"><a class="source" href="https://www.nobelprize.org/prizes/physics/2020/press-release/">Johan Jarnestad/The Royal Swedish Academy of Sciences</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Genzel and Ghez used the world’s largest telescopes (Keck Observatory and the Very Large Telescope) to study the movement of stars in a region called Sagittarius A* at the center of our galaxy. They independently discovered that an extremely massive invisible object – 4 million times more massive than our Sun – is pulling on these stars, making them move in very unusual ways. This is considered the most convincing evidence of a black hole at the center of our galaxy. </p>
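<p>The mass of the invisible object can be estimated from a single stellar orbit using Kepler’s third law, M = 4π²a³/(GT²). Here is a sketch using rough published values for S2, the best-studied star orbiting Sagittarius A* (the semi-major axis and period below are approximate figures we supply, not numbers from this article):</p>

```python
import math

# Kepler's third law: enclosed mass M = 4 * pi^2 * a^3 / (G * T^2)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # seconds
M_SUN = 1.989e30   # kg

a = 970 * AU       # approximate semi-major axis of S2's orbit
T = 16.0 * YEAR    # approximate orbital period of S2

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Inferred central mass: {M / M_SUN:.1e} solar masses")  # a few million
```

<p>The result comes out at a few million solar masses, consistent with the 4-million figure the two teams obtained from far more precise orbital fits.</p>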
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tMax0KgyZZU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Movement of stars at the center of the Milky Way galaxy – evidence for the existence of a supermassive black hole. The stars are the big round bright objects, and the arcs are tracks of their movement over time. The star symbol in the center marks the dark object that everything appears to be moving around: the supermassive black hole. This animation was created by Prof. Andrea Ghez and her research team at UCLA and is based on data sets obtained with the W. M. Keck Telescopes.</span></figcaption>
</figure>
<p>This 2020 Nobel Prize – which follows on the heels of the 2017 Nobel Prize for the discovery of gravitational waves from black holes and other recent stunning discoveries in the field, such as the 2019 image of a black hole horizon by the Event Horizon Telescope – serves as great recognition and inspiration for all humankind, especially for those of us in the relativity and gravitation community who follow in the footsteps of Albert Einstein himself.</p><img src="https://counter.theconversation.com/content/147614/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Gaurav Khanna receives funding from the National Science Foundation. </span></em></p>The 2020 Nobel Prize in physics was awarded to three scientists – an Englishman, an American and a German – for breakthroughs in understanding the most mysterious objects in the universe: black holes.Gaurav Khanna, Professor of Physics, UMass DartmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1219992019-08-22T13:41:15Z2019-08-22T13:41:15ZSingularity: how governments can halt the rise of unfriendly, unstoppable super-AI<figure><img src="https://images.theconversation.com/files/289044/original/file-20190822-170927-2jzri0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/security-concept-skull-binary-code-piracy-520958557">SWEvil</a></span></figcaption></figure><p>The invention of an artificial super-intelligence has been a central theme in science fiction since at least the 19th century. From E.M. Forster’s short story The Machine Stops (1909) to the recent HBO television series Westworld, writers have tended to portray this possibility as an unmitigated disaster. But this issue is no longer one of fiction. Prominent contemporary scientists and engineers are now also worried that super-AI could one day surpass human intelligence (an event known as the “singularity”) and become humanity’s “<a href="https://www.vox.com/future-perfect/2018/10/16/17978596/stephen-hawking-ai-climate-change-robots-future-universe-earth">worst mistake</a>”.</p>
<p>Current trends suggest we are set to enter an international <a href="https://www.ft.com/content/21eb5996-89a3-11e8-bf9e-8771d5404543">arms race</a> for such a technology. Whichever high-tech firm or government lab succeeds in inventing the first super-AI will obtain a potentially world-dominating technology. It is a winner-takes-all prize. So for those who want to stop such an event, the question is how to discourage this kind of arms race, or at least incentivise competing teams not to cut corners with AI safety.</p>
<p>A super-AI raises two fundamental challenges for its inventors, as philosopher <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12403">Nick Bostrom</a> and <a href="https://www.springerprofessional.de/racing-to-the-precipice-a-model-of-artificial-intelligence-devel/5088852?fulltextView=true">others</a> have pointed out. One is a control problem, which is how to make sure the super-AI has the same objectives as humanity. Without this, the intelligence could deliberately, accidentally or by neglect destroy humanity – an “AI disaster”. </p>
<p>The second is a political problem, which is how to ensure that the benefits of a super-intelligence do not go only to a small elite, causing massive social and wealth inequalities. If a super-AI arms race occurs, it could lead competing groups to <a href="https://www.springerprofessional.de/racing-to-the-precipice-a-model-of-artificial-intelligence-devel/5088852?fulltextView=true">ignore these problems</a> in order to develop their technology more quickly. This could lead to a poor-quality or unfriendly super-AI.</p>
<p><a href="https://www.springerprofessional.de/racing-to-the-precipice-a-model-of-artificial-intelligence-devel/5088852?fulltextView=true">One suggested solution</a> is to use public policy to make it harder to enter the race in order to reduce the number of competing groups and improve the capabilities of those who do enter. The fewer who compete, the less pressure there will be to cut corners in order to win. But how can governments lessen the competition in this way?</p>
<p>My colleague Nicola Dimitri and I recently <a href="https://www.springerprofessional.de/the-race-for-an-artificial-general-intelligence-implications-for/16667660?fulltextView=true#CR20">published a paper</a> that tried to answer this question. We first showed that in a typical winner-takes all race, such as the one to build the first super-AI, only the most competitive teams will participate. This is because the probability of actually inventing the super-AI is very small, and entering the race is very expensive because of the large investment in research and development needed.</p>
<p>Indeed, this seems to be the current situation with the development of simpler “narrow” AI. <a href="https://capturedeconomy.com/some-facts-of-high-tech-patenting/">Patent applications</a> for this kind of AI are dominated by a few firms, and the vast bulk of AI research is done in just <a href="https://www.wipo.int/publications/en/details.jsp?id=4386">three regions</a> (the US, China and Europe). There also seem to be very few, <a href="https://www.popsci.com/robot-uprising-enlightenment-now/">if any</a>, groups presently investing in building a super-AI. </p>
<p>This suggests reducing the number of competing groups isn’t the most important priority at the moment. But even with smaller numbers of competitors in the race, the intensity of competition could still lead to the problems mentioned above. So to reduce the intensity of competition between groups striving to build a super-AI and raise their capabilities, governments could turn to public procurement and taxes. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=350&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=350&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=350&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=440&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=440&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289045/original/file-20190822-170951-192nfut.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=440&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Governments could encourage the integration of human and artificial intelligence.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/white-cyborg-hand-about-touch-human-1106072312?src=LTmhm804VkO5OMax_0W5mg-1-1">sdecoret/Shutterstock</a></span>
</figcaption>
</figure>
<p><a href="https://www.cambridge.org/core/books/handbook-of-procurement/CCF3672DD62B5F336594B4023111AFF5">Public procurement</a> refers to all the things governments pay private companies to provide, from software for use in government agencies to contracts to run services. Governments could impose constraints on any super-AI supplier that required them to address the potential problems, and support complementary technologies to enhance human intelligence and integrate it with AI.</p>
<p>But governments could also offer to buy a less-than-best version of super-AI, effectively creating a “second prize” in the arms race and stopping it from being a winner-takes-all competition. With an intermediate prize, which could be for inventing something close to (but not exactly) a super-AI, competing groups will have an incentive to invest and co-operate more, reducing the intensity of competition. A second prize would also reduce the risk of failure and justify more investment, helping to increase the capabilities of the competing teams.</p>
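The incentive effect of such a second prize can be illustrated with a toy expected-payoff calculation. This is purely a sketch: the labs, probabilities and prize values below are illustrative assumptions, not estimates from the research cited above.

```python
# Toy model: two labs race to build a super-AI.
# "Cutting corners" on safety raises a lab's chance of winning,
# but risks an outright failure that forfeits any payoff.

def expected_payoff(win_prob, fail_prob, first_prize, second_prize):
    """Expected payoff: win the first prize with probability win_prob,
    otherwise collect the second prize - unless the project fails
    outright (probability fail_prob), in which case the payoff is zero."""
    return (1 - fail_prob) * (win_prob * first_prize
                              + (1 - win_prob) * second_prize)

# Winner-takes-all race (no second prize):
careful = expected_payoff(win_prob=0.4, fail_prob=0.0,
                          first_prize=100, second_prize=0)
rushed = expected_payoff(win_prob=0.6, fail_prob=0.3,
                         first_prize=100, second_prize=0)
print(careful, rushed)  # 40 vs 42: cutting corners pays

# Now a government-funded second prize of 50:
careful2 = expected_payoff(0.4, 0.0, 100, 50)  # 40 + 0.6*50 = 70
rushed2 = expected_payoff(0.6, 0.3, 100, 50)   # 0.7*(60 + 20) = 56
print(careful2, rushed2)  # 70 vs 56: caution now pays
```

Under these (assumed) numbers, the second prize flips the ranking: once losing no longer means walking away with nothing, the safer strategy dominates, which is the intuition behind softening the winner-takes-all structure.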
<p>As for taxes, governments could set the tax rate on the group that invents super-AI according to how friendly or unfriendly the AI is. A high enough tax rate would essentially mean the nationalisation of the super-AI. This would strongly discourage private firms from cutting corners for fear of losing their product to the state. </p>
<h2>Public good not private monopoly</h2>
<p>This idea may require better global co-ordination of taxation and regulation of super-AI. But it wouldn’t need all governments to be involved. In theory, a <a href="https://www.amazon.co.uk/Why-Cooperate-Incentive-Supply-Global/dp/0199585210">single country</a> or region (such as the EU) could carry the costs and effort involved in tackling the problems and ethics of super-AI. But all countries would benefit and super-AI would become a public good rather than an unstoppable private monopoly.</p>
<p>Of course all this depends on super-AI actually being a threat to humanity. And some scientists don’t think it will be. We might naturally <a href="https://www.popsci.com/robot-uprising-enlightenment-now/">engineer away</a> the risks of super-AI over time. Some think humans might even <a href="https://books.google.nl/books?id=e4d7DwAAQBAJ&pg=PA243&lpg=PA243&dq=Ford+Kurzweil+control+problem&source=bl&ots=10f2gbsu9-&sig=ACfU3U0iT8IYywvLudd7iwNXHx3WY1321A&hl=nl&sa=X&ved=2ahUKEwjauKCnjN_jAhWGzqQKHcSdBq4Q6AEwEXoECAgQAQ#v=onepage&q=Ford%2520Kurzweil%2520control%2520problem&f=false">merge with AI</a>.</p>
<p>Whatever the case, our planet and its inhabitants will benefit enormously from making sure we <a href="https://futureoflife.org/ai-open-letter/?cn-reloaded=1">get the best</a> from AI, a technology that is still in its <a href="https://www.ft.com/content/bf3d708c-3077-11e9-8744-e7016697f225">infancy</a>. For this, we need a better understanding of what role government can play.</p>
<p class="fine-print"><em><span>Wim Naudé does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
We’re facing an arms race to build an artificial super-intelligence – this could be a disaster without government action.
Wim Naudé, Professorial Fellow, Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT), United Nations University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/120514
2019-07-24T09:11:46Z
AI’s current hype and hysteria could set the technology back by decades
<figure><img src="https://images.theconversation.com/files/284710/original/file-20190718-116569-15tg8ls.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI isn't as scary as we imagine.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/robot-arm-white-frightening-mask-red-1225945153?src=LcYiqKKRJYXpUNOn6bWCNw-1-62&studio=1">AndreyZH/Shutterstock</a></span></figcaption></figure>
<p>Most discussions about artificial intelligence (AI) are characterised by hyperbole and hysteria.
Though some of the world’s most prominent and successful thinkers <a href="https://www.theguardian.com/books/2019/jun/27/novacene-by-james-lovelock-review">regularly forecast</a> that AI will either solve all our problems or destroy us or our society, and the press <a href="https://www.cnbc.com/2017/11/29/one-third-of-us-workers-could-be-jobless-by-2030-due-to-automation.html">frequently report</a> on how AI will threaten jobs and raise inequality, there’s actually <a href="https://www.iza.org/publications/dp/12218/the-race-against-the-robots-and-the-fallacy-of-the-giant-cheesecake-immediate-and-imagined-impacts-of-artificial-intelligence">very little evidence</a> to support these ideas. What’s more, this could actually end up turning people against AI research, bringing significant progress in the technology to a halt.</p>
<p>The hyperbole around AI largely stems from its promotion by tech-evangelists and <a href="https://theconversation.com/will-ai-spell-the-end-of-humanity-the-tech-industry-wants-you-to-think-so-67264">self-interested investors</a>. Google CEO <a href="https://www.weforum.org/agenda/2018/01/google-ceo-ai-will-be-bigger-than-electricity-or-fire">Sundar Pichai</a> declared AI to be “probably the most important thing humanity has ever worked on”. Given the importance of AI to Google’s business model, he would say that.</p>
<p>Some <a href="https://www.nytimes.com/2010/06/13/business/13sing.html">even argue</a> that AI is a solution to humanity’s fundamental problems, <a href="https://www.theguardian.com/books/2017/mar/23/to-be-a-machine-by-mark-oconnell-review">including death</a>, and that we will eventually merge with machines to become an unstoppable force. The inventor and writer <a href="https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045">Ray Kurzweil</a> has famously argued this “Singularity” will occur as soon as 2045.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/super-intelligence-and-eternal-life-transhumanisms-faithful-follow-it-blindly-into-a-future-for-the-elite-78538">Super-intelligence and eternal life: transhumanism's faithful follow it blindly into a future for the elite</a>
</strong>
</em>
</p>
<hr>
<p>The hysteria around AI comes from similar sources. The likes of physicist <a href="https://theconversation.com/stephen-hawking-warned-about-the-perils-of-artificial-intelligence-yet-ai-gave-him-a-voice-93416">Stephen Hawking</a> and billionaire tech entrepreneur <a href="https://twitter.com/elonmusk/status/904638455761612800?ref_src=twsrc%5Etfw">Elon Musk</a> warned that AI poses an <a href="https://futureoflife.org/ai-open-letter/">existential threat</a> to humanity. If AI doesn’t destroy us, the doomsayers argue, then it may at least cause <a href="https://www.theguardian.com/books/2015/oct/01/the-rise-of-robots-humans-need-not-apply-review">mass unemployment</a> through job automation.</p>
<p>The reality of AI is currently very different, particularly when you look at the threat of automation. Back in 2013, <a href="https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf">researchers estimated</a> that, in the following ten to 20 years, 47% of jobs in the US could be automated. Six years later, instead of a trend towards mass joblessness, we’re in fact seeing US unemployment at <a href="https://www.washingtonpost.com/business/2019/05/03/us-economy-added-jobs-april-unemployment-fell-percent-lowest-since/?utm_term=.68dff8e5b96e">a historic low</a>. </p>
<p><a href="https://bruegel.org/2014/07/the-computerisation-of-european-jobs/">Even more</a> job losses have been threatened for the EU. But <a href="http://ftp.iza.org/dp12063.pdf">past evidence</a> indicates otherwise, given that between 1999 and 2010, automation created 1.5m more jobs than it destroyed in Europe.</p>
<p>AI is not even making advanced economies more productive. For example, in the ten years following the financial crisis, labour productivity in the UK grew at its <a href="https://bankunderground.co.uk/2018/04/25/bitesize-the-past-decades-productivity-growth-in-historical-context">slowest average rate</a> since 1761. Evidence shows that even global superstar firms – including firms that are among the top investors in AI and whose business models depend on it, such as Google, Facebook and Amazon – have not <a href="https://www.nber.org/papers/w25529">become more productive</a>. This contradicts claims that AI will inevitably <a href="https://www.accenture.com/sk-en/insight-artificial-intelligence-future-growth">enhance productivity</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/284724/original/file-20190718-116547-1lw65rp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Current AI is good at finding patterns in large datasets, and not much else.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/system-control-room-technical-operator-works-771480466?src=oWLHp_kcoR19ViQejyYrJQ-1-14&studio=1">Gorodenkoff/Shutterstock</a></span>
</figcaption>
</figure>
<p>So why are the society-transforming effects of AI not materialising? There are at least four reasons. First, AI <a href="https://www.nber.org/chapters/c14007.pdf">diffuses</a> through the economy much more slowly than most people think. This is because most current AI is based on learning from large amounts of data, and it is especially difficult for <a href="https://www0.gsb.columbia.edu/faculty/lveldkamp/papers/BigDataPnP_manuscript_Veldkamp.pdf">most firms</a> to generate enough data to make the algorithms efficient, or simply to afford to hire data analysts. One manifestation of this slow diffusion is the growing use <a href="https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies">of “pseudo-AI”</a>, where what appears to be an online AI bot interacting with customers is in fact a human operating behind the scenes.</p>
<p>The second reason is AI innovation is <a href="https://academic.oup.com/restud/article-abstract/76/1/283/1577537">getting harder</a>. <a href="https://theconversation.com/artificial-intelligence-heres-what-you-need-to-know-to-understand-how-machines-learn-72004">Machine learning</a> techniques that have driven recent advances may have <a href="https://voicebot.ai/2017/11/05/gartner-hype-cycle-suggests-another-ai-winter-near/">already produced</a> their most easily reached achievements and now seem to be experiencing <a href="https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next">diminishing returns</a>. The exponentially increasing power of computer hardware, as described by <a href="https://theconversation.com/moores-law-is-50-years-old-but-will-it-continue-44511">Moore’s Law</a>, may also be <a href="https://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338">coming to an end</a>. </p>
<p>Related to this is the fact that most AI applications just aren’t that innovative, with AI mostly used to fine-tune <a href="https://www.nber.org/papers/w20379">and disrupt</a> existing products rather than introduce radically new products. For example, Carlsberg is <a href="https://www.forbes.com/sites/bernardmarr/2019/02/01/how-artificial-intelligence-is-used-to-make-beer/#5c19a6ba70cf">investing in AI</a> to help it improve the quality of its beer. But it is still beer. Heka is a US company producing a bed with <a href="https://www.prnewswire.com/news-releases/heka-launches-the-worlds-first-ai-mattress-which-can-improve-sleep-quality-through-autonomously-adapting-to-individual-body-shapes-and-postures-in-real-time-300606732.html">in-built AI</a> to help people sleep better. But it is still a bed. </p>
<p>Third, the slow growth of consumer demand in most Western countries makes it unprofitable for most businesses <a href="https://www.iza.org/publications/dp/12005/artificial-intelligence-jobs-inequality-and-productivity-does-aggregate-demand-matter">to invest in AI</a>. Yet this kind of limit to demand is almost never considered when the impacts of AI are discussed, partly because academic models of how automation will affect the economy are focused on the labour market and/or the supply side of the economy.</p>
<p>Fourth, AI is not really being developed for general application. AI innovation is overwhelmingly in visual systems, ultimately aimed at use in driverless cars. Yet such cars are most notable for their absence from our roads, and technical limits mean they are likely to remain so for a <a href="https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber">long time</a>. </p>
<h2>New thinking needed</h2>
<p>Of course, AI’s small impact in the recent past doesn’t rule out larger impacts in the future. Unexpected progress in AI could still lead to a “robocalypse”. But it will have to come from a different kind of AI. What we currently call “AI” – big data and machine learning – is not really intelligent. It is essentially correlation analysis, looking for patterns in data. Machine learning generates predictions, not explanations. In contrast, human brains are storytelling devices generating explanations.</p>
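The prediction-versus-explanation point can be made concrete with a small numerical sketch (synthetic data, assumed purely for illustration). A hidden cause drives both a measured proxy and the outcome; a pattern-finder fitted on the proxy predicts the outcome almost perfectly, yet intervening on the proxy would change nothing, because the proxy explains nothing.

```python
import random

random.seed(0)

# Hidden cause drives both the observed proxy and the outcome.
n = 1000
cause = [random.gauss(0, 1) for _ in range(n)]
proxy = [c + random.gauss(0, 0.1) for c in cause]  # correlated, not causal
outcome = [2 * c for c in cause]                   # caused only by `cause`

# Least-squares slope of outcome on proxy - plain "correlation analysis",
# the kind of pattern-finding current machine learning excels at.
mean_p = sum(proxy) / n
mean_o = sum(outcome) / n
slope = (sum((p - mean_p) * (o - mean_o) for p, o in zip(proxy, outcome))
         / sum((p - mean_p) ** 2 for p in proxy))

print(f"fitted slope ~ {slope:.2f}")  # close to 2: excellent predictions
# Yet doubling the proxy would not double the outcome, since the proxy
# is merely correlated with the true cause: prediction, not explanation.
```

The fitted model is an accurate predictor and a useless explanation at the same time, which is exactly the gap between pattern-finding and the storytelling, explanation-generating intelligence the paragraph above describes.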
<p>As a result of the hype and hysteria, many governments are scrambling to produce national <a href="https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd">AI strategies</a>. International organisations are rushing to be seen to take action, <a href="http://fiw.merit.unu.edu/">holding conferences</a> and publishing flagship reports on the <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2018">future of work</a>. For example, the United Nations University Centre for Policy Research <a href="https://cpr.unu.edu/tag/artificial-intelligence">claims that</a> AI is “transforming the geopolitical order” and, even more <a href="https://cpr.unu.edu/ai-global-governance-a-new-charter-of-rights-for-the-global-ai-revolution.html">incredibly</a>, that “a shift in the balance of power between intelligent machines and humans is already visible”. </p>
<p>This <a href="https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong">“unhinged” debate</a> about the current and near-future state of AI threatens both an AI <a href="https://www.iza.org/publications/dp/11737/the-race-for-an-artificial-general-intelligence-implications-for-public-policy">arms race</a> and stifling <a href="https://theconversation.com/does-regulating-artificial-intelligence-save-humanity-or-just-stifle-innovation-85718">regulations</a>. This could lead to inappropriate controls and, worse, a loss of public trust in AI research. It could even hasten another AI winter – as occurred <a href="https://www.computer.org/csdl/magazine/ex/2008/02/mex2008020002/13rRUyeCkdP">in the 1980s</a> – in which interest and funding disappear for years or even decades after a period of disappointment. All at a time when the world <a href="https://www.nber.org/papers/w18315">needs more</a>, not less, technological innovation.</p>
<p class="fine-print"><em><span>Wim Naudé does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Prominent thinkers exaggerating the potential or danger of artificial intelligence are pushing us towards a new AI winter.
Wim Naudé, Professorial Fellow, Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT), United Nations University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/111158
2019-02-21T11:43:23Z
What alchemy and astrology can teach artificial intelligence researchers
<figure><img src="https://images.theconversation.com/files/259312/original/file-20190215-56246-1vj2qqt.jpg?ixlib=rb-1.1.0&rect=186%2C379%2C2072%2C2417&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Alchemists' dreams distracted from real scientific goals.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Van_Bentum_Explosion_in_the_Alchemist%E2%80%99s_Laboratory_FA_2000.001.285.jpg">Justus Gustav van Bentum/Wikimedia Commons</a></span></figcaption></figure>
<p>Artificial intelligence researchers and engineers have spent a lot of effort trying to build machines that look like humans and operate largely independently. Those tempting dreams have distracted many of them from where the real progress is already happening: in systems that <a href="https://standards.ieee.org/industry-connections/ec/ead-v1.html">enhance – rather than replace – human capabilities</a>. To accelerate the shift to new ways of thinking, AI designers and developers could take some lessons from the missteps of past researchers.</p>
<p>For example, alchemists, like <a href="https://www.biography.com/news/isaac-newton-alchemy-philosophers-stone">Isaac Newton</a>, <a href="https://www.britannica.com/topic/alchemy">pursued ambitious goals</a> such as converting lead to gold, creating a panacea to cure all diseases, and <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo12335123.html">finding potions for immortality</a>. Alluring as these goals were, the charlatans pursuing them may have <a href="https://press.uchicago.edu/ucp/books/book/chicago/A/bo5506197.html">secured princely financial backing</a> that would have been better spent developing modern chemistry.</p>
<p>Equally optimistically, astrologers believed they could understand human personality based on birthdates and predict future events by studying the positions of the stars and planets. These promises over the past thousand years <a href="https://www.upress.pitt.edu/books/9780822944430/">often received kingly endorsement</a>, possibly slowing the work of those who were adopting scientific methods that eventually led to astronomy.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=740&fit=crop&dpr=1 600w, https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=740&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=740&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=930&fit=crop&dpr=1 754w, https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=930&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/259313/original/file-20190215-56229-1trvern.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=930&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Astrologers looked to models of the heavens for signs about the future.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Personification_of_Astrology,_by_Giovanni_Francesco_Barbieri,_called_Guercino,_c._1650-1655,_oil_on_canvas_-_Blanton_Museum_of_Art_-_Austin,_Texas_-_DSC07888.jpg">Giovanni Francesco Barbieri via Daderot/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>As alchemy and astrology evolved, the participants became <a href="https://www.smithsonianmag.com/history/alchemy-may-not-been-pseudoscience-we-thought-it-was-180949430/">more deliberate and organized</a> – what might now be called more scientific – about their studies. That shift eventually led to important findings in chemistry, such as those by <a href="https://www.sciencehistory.org/historical-profile/antoine-laurent-lavoisier">Lavoisier</a> and <a href="https://www.sciencehistory.org/historical-profile/joseph-priestley">Priestley</a> in the 18th century. In astronomy, <a href="https://www.britannica.com/biography/Johannes-Kepler">Kepler</a> and <a href="https://www.newton.ac.uk/about/isaac-newton/life">Newton</a> himself made significant findings in the 17th and 18th centuries. A similar turning point is coming for artificial intelligence. Bold innovators are putting aside tempting but impractical dreams of anthropomorphic designs and excessive autonomy. They focus on systems that restore, rely on, and expand human control and responsibility.</p>
<h2>Updating early AI dreams</h2>
<p>Back in the 1950s, artificial intelligence researchers pursued big goals, such as human-level computational intelligence and machine consciousness. Even during the past 20 years some researchers worked toward the “<a href="http://singularity.com/">singularity</a>” fantasy of machines that are superior to humans in every way. These dreams succeeded in attracting attention from sympathetic journalists and <a href="https://www.bloomberg.com/news/articles/2018-02-15/silicon-valley-s-singularity-university-has-some-serious-reality-problems">financial backing from government and industry</a>. But to me, those aspirations still seem like counterproductive wishful thinking and B-level science fiction.</p>
<p>The dream of creating a <a href="https://www.therobotreport.com/humanoid-robots-watch-2019/">human-shaped robot</a> that acts like a person has persisted for more than 50 years. <a href="https://www.theverge.com/2018/6/28/17514134/honda-asimo-humanoid-robot-retire">Honda’s near-life-size Asimo</a> and the <a href="https://en.wikipedia.org/wiki/Ananova">web-based news reader Ananova</a> got a lot of <a href="https://nypost.com/2018/11/08/this-news-anchor-is-actually-an-ai-powered-robot/">media attention</a>. <a href="https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics">Hanson Robotics’ Sophia</a> even received <a href="https://qz.com/1205017/saudi-arabias-robot-citizen-is-eroding-human-rights/">Saudi Arabian citizenship</a>. But they have little commercial future. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/qNoTjrgMUcs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The robot Sophia spoke at the United Nations.</span></figcaption>
</figure>
<p>By contrast, down-to-earth user-centered designs for information search, e-commerce sites, social media and smartphone apps <a href="https://www.pearson.com/us/higher-education/program/Shneiderman-Designing-the-User-Interface-Strategies-for-Effective-Human-Computer-Interaction-6th-Edition/PGM327860.html">have been wild successes</a>. There is good reason that Amazon, Apple, Facebook, Google and Microsoft are some of the world’s biggest companies – they all use more functional, if less glamorous, types of AI.</p>
<p>Today’s cellphones feature speech recognition, face recognition and automated translation, which all <a href="https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html">use artificial intelligence technologies</a>. These functions increase human control and give users more options, without the deception and theatrics of a humanoid robot.</p>
<h2>Yielding control</h2>
<p>Efforts that pursue advanced forms of computer autonomy are also dangerous. When developers assume their machines will function correctly, they often shortchange interfaces that would allow human users to quickly take control when something goes wrong.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=412&fit=crop&dpr=1 600w, https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=412&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=412&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=517&fit=crop&dpr=1 754w, https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=517&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/259314/original/file-20190215-56212-1uczk7y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=517&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Investigators search through wreckage from Lion Air Flight 610 after its crash in the Java Sea in October 2018.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Indonesia-Lion-Air-Crash/22d78914a4174c1f91503f0fcd865287/174/0">AP Photo/Tatan Syuflana</a></span>
</figcaption>
</figure>
<p>These problems can be deadly. In the <a href="https://www.nytimes.com/2019/02/03/world/asia/lion-air-plane-crash-pilots.html">October 2018 crash of Lion Air’s Boeing 737 Max</a>, a sensor failure caused the newly designed automatic pilot to steer the plane downwards. The pilots couldn’t figure out how to <a href="https://www.nytimes.com/interactive/2018/11/16/world/asia/lion-air-crash-cockpit.html">override those automatic controls</a> to keep the plane in the air. Similar problems have been factors in stock market “flash crashes,” like the 2010 event in which <a href="https://en.wikipedia.org/wiki/2010_Flash_Crash">US$1 trillion disappeared in 36 minutes</a>. And poorly designed medical devices have delivered <a href="https://psnet.ahrq.gov/webmm/case/291/Death-by-PCA">deadly doses of medications</a>.</p>
<p>The <a href="https://www.ntsb.gov/investigations/accidentreports/reports/har1702.pdf">National Transportation Safety Board report on the deadly May 2016 Tesla crash</a> called for automated systems to keep detailed records that would allow investigators to analyze failures. Those insights would lead to safer and more effective designs.</p>
<h2>Getting to human-centered solutions</h2>
<p>Successful automation is all around: Navigation applications give drivers control by showing times for alternative routes. E-commerce websites show shoppers options, customer reviews and clear pricing so they can find and order the goods they need. Elevators, clothes-washing machines and airline check-in kiosks, too, have meaningful controls that enable users to get what they need done quickly and reliably. When modern cameras assist photographers in taking properly focused and exposed photos, users have a sense of mastery and accomplishment for composing the image, even as they get assistance with optimizing technical details. </p>
<p>Without being human-like or fully independent, these and thousands of other applications enable users to accomplish their tasks with self-confidence and sometimes even pride.</p>
<p>A new report from a leading engineering industry professional group urges technologists to <a href="https://standards.ieee.org/industry-connections/ec/ead-v1.html">ignore tempting fantasies</a>. Rather, the report suggests, developers should focus on technologies that support human performance and are more immediately useful.</p>
<p>In a flourishing automation-enhanced world, clear, convenient interfaces could let humans control automation to make the most of people’s initiative, creativity and responsibility. The most successful machines could be powerful tools that let users carry out ever-richer tasks with confidence, such as helping architects find innovative ways to design energy-efficient buildings, and giving journalists tools to dig deeper into data to detect fraud and corruption. Other machines could detect – not contribute to – problems like unsafe medical conditions and bias in mortgage loan approvals. Perhaps they could even advise the people responsible on ways to fix things. </p>
<p>Humans are accomplished at building tools that expand their creativity – and then at using those tools in even more innovative ways than their designers intended. In my view, it’s time to let more people be more creative more of the time, by shifting away from the alchemy and astrology phase of AI research. </p>
<p>Technology designers who appreciate and amplify the key aspects of humanity are most likely to invent the next generation of powerful tools. These designers will shift from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.</p>
<p class="fine-print"><em><span>Ben Shneiderman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Pursuing big, unrealistic dreams can distract from real scientific progress. It’s time for AI research to focus on restoring and expanding human control and responsibility.
Ben Shneiderman, Professor of Computer Science, University of Maryland
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/107062
2019-01-09T11:46:54Z
Rotating black holes may serve as gentle portals for hyperspace travel
<figure><img src="https://images.theconversation.com/files/252924/original/file-20190108-32124-uxwxoi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Feel like traveling to another dimension? Better choose your black hole wisely.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/black-hole-abstract-space-wallpaper-universe-684787579?src=wy5Fo0KLc18wowi_WboVBA-3-94">Vadim Sadovski/Shutterstock.com</a></span></figcaption></figure>
<p>One of the most cherished science fiction scenarios is using a black hole as a portal to another dimension or time or universe. That fantasy may be closer to reality than previously imagined.</p>
<p>Black holes are perhaps the most mysterious objects in the universe. They are the consequence of gravity crushing a dying star without limit, leading to the formation of a true singularity – an entire star compressed down to a single point of infinite density. This dense and hot singularity punches a hole in the fabric of spacetime itself, possibly opening up an opportunity for hyperspace travel – that is, a shortcut through spacetime that allows travel over cosmic-scale distances in a short period of time. </p>
<p>Researchers previously thought that any spacecraft attempting to use a black hole as a portal of this type would have to reckon with nature at its worst. The hot and dense singularity would cause the spacecraft to endure a sequence of increasingly uncomfortable tidal stretching and squeezing before being completely vaporized.</p>
<h2>Flying through a black hole</h2>
<p><a href="http://gravity.phy.umassd.edu/">My team</a> at the University of Massachusetts Dartmouth and a colleague at Georgia Gwinnett College have shown that not all black holes are created equal. If the black hole, like Sagittarius A* at the center of our own galaxy, is large and rotating, then the outlook for a spacecraft changes dramatically. That’s because the singularity a spacecraft would have to contend with is very gentle and could allow for a very peaceful passage. </p>
<p>This is possible because the relevant singularity inside a rotating black hole is technically “weak,” and thus does not damage objects that interact with it. At first this fact may seem counterintuitive. But one can think of it as analogous to the common experience of quickly passing one’s finger through a candle’s near 2,000-degree flame without getting burned. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/252906/original/file-20190108-32145-1s5wzth.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hold your finger close to the flame and it will burn. Swipe it through quickly and you won’t feel much. Similarly, passing through a large rotating black hole, you are more likely to come out the other side unharmed.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/person-takes-his-finger-very-close-1079996603">mirbasar/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>My colleague <a href="https://scholar.google.com/citations?user=Zsn-SygAAAAJ&hl=en">Lior Burko</a> and <a href="https://scholar.google.com/citations?user=382Ttr0AAAAJ&hl=en">I</a> have been investigating the physics of black holes for over two decades. In 2016, my Ph.D. student, Caroline Mallary, inspired by Christopher Nolan’s blockbuster film <a href="https://www.imdb.com/title/tt0816692/">“Interstellar,”</a> set out to test whether Cooper (Matthew McConaughey’s character) could survive his fall deep into Gargantua – a fictional, supermassive, rapidly rotating black hole some 100 million times the mass of our sun. “Interstellar” was based on a book written by Nobel Prize-winning astrophysicist <a href="https://www.nobelprize.org/prizes/physics/2017/thorne/facts/">Kip Thorne</a>, and Gargantua’s physical properties are central to the plot of this Hollywood movie. </p>
<p>Building on work done by physicist <a href="https://phys.technion.ac.il/en/people/faculty/person/44">Amos Ori</a> two decades prior, and armed with her strong computational skills, <a href="https://doi.org/10.1103/PhysRevD.98.104024">Mallary built a computer model</a> that would capture most of the essential physical effects on a spacecraft, or any large object, falling into a large, rotating black hole like Sagittarius A*. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/252535/original/file-20190104-32124-bu84m7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The fictional Miller’s planet orbiting the black hole Gargantua, in the movie ‘Interstellar.’</span>
<span class="attribution"><a class="source" href="http://interstellarfilm.wikia.com/wiki/Gargantua">interstellarfilm.wikia.com</a></span>
</figcaption>
</figure>
<h2>Not even a bumpy ride?</h2>
<p>What she discovered is that under all conditions an object falling into a rotating black hole would not experience infinitely large effects upon passage through the hole’s so-called inner horizon singularity. This is the singularity that an object entering a rotating black hole cannot maneuver around or avoid. Not only that, under the right circumstances these effects may be negligibly small, allowing for a rather comfortable passage through the singularity. In fact, there may be no noticeable effects on the falling object at all. This increases the feasibility of using large, rotating black holes as portals for hyperspace travel. </p>
<p>Mallary also discovered a feature that was not fully appreciated before: the singularity of a rotating black hole would subject the spacecraft to rapidly increasing cycles of stretching and squeezing. But for very large black holes like Gargantua, the strength of this effect would be very small. So the spacecraft and any individuals on board would not detect it. </p>
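To get a feel for why the black hole’s size matters so much, here is a rough back-of-the-envelope estimate – a simple Newtonian sketch, not Mallary’s relativistic calculation. The tidal acceleration across an object at the event horizon scales as the inverse square of the black hole’s mass, so a Gargantua-sized hole pulls far more gently at its horizon than a stellar-mass one:

```python
# Rough Newtonian sketch of tidal acceleration at a black hole's horizon.
# This is an order-of-magnitude illustration only, not a relativistic model.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def horizon_tidal_acceleration(mass_kg, body_length_m):
    """Difference in gravitational pull across `body_length_m`
    at the Schwarzschild radius r_s = 2GM/c^2, in m/s^2."""
    r_s = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * body_length_m / r_s**3

# A 10-metre spacecraft at the horizon of...
stellar = horizon_tidal_acceleration(10 * M_SUN, 10.0)     # a 10-solar-mass hole
gargantua = horizon_tidal_acceleration(1e8 * M_SUN, 10.0)  # a Gargantua-like hole

# stellar comes out around a billion m/s^2 (instantly destructive),
# while gargantua is around a hundred-thousandth of a m/s^2 (imperceptible).
```

Because the horizon radius itself grows with mass, the tidal stress at the horizon drops by a factor of 10^14 between these two cases – which is why only very large black holes offer any hope of a gentle passage.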
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=266&fit=crop&dpr=1 600w, https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=266&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=266&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=334&fit=crop&dpr=1 754w, https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=334&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/252558/original/file-20190105-32154-ve1zfw.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=334&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">This graph depicts the physical strain on the spacecraft’s steel frame as it plummets into a rotating black hole. The inset shows a detailed zoom-in for very late times. The important thing to note is that the strain increases dramatically close to the black hole, but does not grow indefinitely. Therefore, the spacecraft and its inhabitants may survive the journey.</span>
<span class="attribution"><span class="source">Khanna/UMassD</span></span>
</figcaption>
</figure>
<p>The crucial point is that these effects do not increase without bound; they stay finite, even though one might expect the stresses on the spacecraft to grow indefinitely as it approaches the black hole. </p>
<p>There are a few important simplifying assumptions, and resulting caveats, in Mallary’s model. The main assumption is that the black hole under consideration is completely isolated, and thus not subject to constant disturbances from a source such as another star in its vicinity or even infalling radiation. While this assumption allows important simplifications, it is worth noting that most black holes are surrounded by cosmic material – dust, gas, radiation. </p>
<p>Therefore, a natural extension of <a href="https://doi.org/10.1103/PhysRevD.98.104024">Mallary’s work</a> would be to perform a similar study in the context of a more realistic astrophysical black hole. </p>
<p>Mallary’s approach of using a computer simulation to examine the effects of a black hole on an object is very common in the field of black hole physics. Needless to say, we do not have the capability of performing real experiments in or near black holes yet, so scientists resort to theory and simulations to develop an understanding, by making predictions and new discoveries.</p><img src="https://counter.theconversation.com/content/107062/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Gaurav Khanna receives funding from NSF. </span></em></p>Feel like visiting another star system or dimension? You can do this by traveling through a spacetime portal of a black hole. But you better choose carefully. All black holes are not created equal.Gaurav Khanna, Professor of Physics, UMass DartmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1070632018-12-13T11:44:14Z2018-12-13T11:44:14ZTime travel is possible – but only if you have an object with infinite mass<figure><img src="https://images.theconversation.com/files/249467/original/file-20181207-128196-1flswyg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Dr. Who used this time machine, called the TARDIS, to travel through space and time on the BBC television show Dr. Who. </span> <span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/d/d9/Tardis_BBC_Television_Center.jpg">Babbel1996 / Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>The concept of time travel has always captured the imagination of physicists and laypersons alike. But is it really possible? Of course it is. We’re doing it right now, aren’t we? We are all traveling into the future one second at a time. </p>
<p>But that was not what you were thinking. Can we travel much further into the future? Absolutely. If we could travel close to the speed of light, or in the proximity of a black hole, time would slow down, enabling us to travel arbitrarily far into the future. The really interesting question is whether we can travel back into the past. </p>
<p>I am a physics professor at the University of Massachusetts, Dartmouth, and first heard about the notion of time travel when I was 7, from a 1980 episode of Carl Sagan’s classic TV series, “<a href="https://www.imdb.com/title/tt0081846/">Cosmos</a>.” I decided right then that someday, I was going to pursue a deep study of the theory that underlies such creative and remarkable ideas: Einstein’s relativity. Twenty years later, I emerged with a Ph.D. in the field and have been an active researcher in the theory ever since. </p>
<p>One of my doctoral students <a href="https://arxiv.org/abs/1708.09505">published a paper</a> in 2018 in the journal Classical and Quantum Gravity that describes how to build a time machine using a very simple construction. </p>
<h2>Closed time-like curves</h2>
<p>Einstein’s general theory of relativity allows for the possibility of warping time to such a high degree that it actually folds upon itself, resulting in a time loop. Imagine you’re traveling along this loop; at some point, you’d end up at a moment in the past and begin reliving the same moments from then on, all over again – a bit like déjà vu, except you wouldn’t realize it. Such constructs are often referred to as “closed time-like curves,” or CTCs, in the research literature, and popularly as “time machines.” Time machines are a byproduct of effective faster-than-light travel schemes, and understanding them can deepen our understanding of how the universe works. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/250079/original/file-20181211-76986-1villyv.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Here we see a time loop. Green shows the short way through the wormhole. Red shows the long way through normal space. Since the travel time on the green path could be very small compared to the red, a wormhole allows for the possibility of time travel.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/7/7b/Wormhole-demo.png">Panzi</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Over the past few decades well-known physicists like <a href="https://doi.org/10.1103/PhysRevLett.61.1446">Kip Thorne</a> and <a href="https://doi.org/10.1103/PhysRevD.46.603">Stephen Hawking</a> produced seminal work on models related to time machines. </p>
<p>The general conclusion that has emerged from previous research, including Thorne’s and Hawking’s, is that nature forbids time loops. This is perhaps best explained in Hawking’s “<a href="https://doi.org/10.1103/PhysRevD.46.603">Chronology Protection Conjecture</a>,” which essentially says that nature doesn’t allow for changes to its past history, thus sparing us from the paradoxes that can emerge if time travel were possible. </p>
<p>Perhaps the best known of the paradoxes that emerge from time travel into the past is the so-called “grandfather paradox,” in which a traveler goes back into the past and murders his own grandfather. This alters the course of history in such a way that a contradiction emerges: The traveler was never born and therefore cannot exist. There have been many movie and novel plots based on the paradoxes that result from time travel – perhaps the most popular being the “<a href="https://www.imdb.com/title/tt0088763/">Back to the Future</a>” movies and “<a href="https://www.imdb.com/title/tt0107048/">Groundhog Day</a>.” </p>
<h2>Exotic matter</h2>
<p>Depending on the details, different physical phenomena may intervene to prevent closed time-like curves from developing in physical systems. The most common is the requirement that a particular type of “exotic” matter be present in order for a time loop to exist. Loosely speaking, exotic matter is matter that has negative mass. The problem is that negative mass is not known to exist in nature.</p>
<p>Caroline Mallary, a doctoral student at the University of Massachusetts Dartmouth, has <a href="https://arxiv.org/abs/1708.09505">published a new model</a> for a time machine in the journal <a href="http://iopscience.iop.org/article/10.1088/1361-6382/aad306/meta">Classical & Quantum Gravity</a>. This new model does not require any negative-mass exotic material and offers a very simple design. </p>
<p>Mallary’s model consists of two super-long cars – built of material that is not exotic and has positive mass – parked in parallel. One car moves forward rapidly, leaving the other parked. Mallary was able to show that in such a setup, a time loop can be found in the space between the cars. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bXJiwzgKJWo?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">An animation shows how Mallary’s time loop works. As the spacecraft enters the time loop, its future self appears as well, and one can trace back the positions of both at every moment afterwards. This animation is from the perspective of an external observer, who is watching the spacecraft enter and emerge from the time loop.</span></figcaption>
</figure>
<h2>So can you build this in your backyard?</h2>
<p>If you suspect there is a catch, you are correct. Mallary’s model requires that the center of each car has infinite density. That means each car must contain an object – called a singularity – with infinite density, temperature and pressure. Moreover, unlike the singularities in the interior of black holes, which are totally inaccessible from the outside, the singularities in Mallary’s model are completely bare and observable, and therefore have true physical effects. </p>
<p>Physicists don’t expect such peculiar objects to exist in nature either. So, unfortunately a time machine is not going to be available anytime soon. However, this work shows that physicists may have to refine their ideas about why closed time-like curves are forbidden.</p><img src="https://counter.theconversation.com/content/107063/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Gaurav Khanna receives funding from NSF.</span></em></p>Who wouldn’t want to travel in time, glimpsing the dinosaurs or peeking at humans 2,000 years from now? Now physicists have designed a time machine that seems deceptively simple.Gaurav Khanna, Professor of Physics, UMass DartmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/863192017-11-03T02:00:19Z2017-11-03T02:00:19ZAs a human, I don’t do technology. I am technology<figure><img src="https://images.theconversation.com/files/192948/original/file-20171102-26438-1bjld95.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Humans and their technologies have evolved together over time. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/kathmandu-nepal-october-2011-asian-street-676153594?src=fbDLWweGZ_ei2B7lK_raFg-1-69">Anton Jankovoy / Shutterstock.com</a></span></figcaption></figure><p>The <a href="http://www.abc.net.au/radionational/programs/boyerlectures/">2017 Boyer Lectures</a> explore what it is to be human in a digital world. In her four addresses, <a href="https://cecs.anu.edu.au/people/genevieve-bell">Professor Genevieve Bell</a> discusses how we as Australians can build a digital future that is right for us.</p>
<p>For many Australians and indeed people around the world, views of the future are caught up in a binary narrative: “technology good” or “technology bad”. </p>
<p>In this dichotomy, technology is considered to be a thing that humans have and humans do. I would like to put forward a different notion: that technology is what humans <em>are</em>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/digital-technology-may-start-a-new-scientific-revolution-in-social-research-72588">Digital technology may start a new scientific revolution in social research</a>
</strong>
</em>
</p>
<hr>
<h2>Views of technology</h2>
<p>We’re experiencing a <a href="https://www.theguardian.com/science/2017/nov/01/artificial-intelligence-risks-gm-style-public-backlash-experts-warn">great anxiety</a> when it comes to the impact of technology. This public conversation is a polarised reaction to fast-paced technological transformation. </p>
<p>On one side are the technological fetishists, who welcome a new age of technological transcendence. People like computer scientist and futurist <a href="https://en.wikipedia.org/wiki/Ray_Kurzweil">Ray Kurzweil</a> see the coming of hyper-intelligent machines, a “singularity”, as the next stage in evolution.</p>
<p>“<a href="https://en.wikipedia.org/wiki/Technological_singularity">The Singularity</a>” is the idea that artificial intelligence (AI) will reach a stage where it will be able to engineer ever more powerful AI, leading to an exponential rise in the power of AI, far surpassing human intelligence. </p>
<p>In this version of the story, technology will liberate us from death, ignorance and even the bounds of our life on Earth.</p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/a-survival-guide-for-the-coming-ai-revolution-72974">A survival guide for the coming AI revolution</a></em> </p>
<hr>
<p>Others are far more pessimistic about technology. Although US entrepreneur Elon Musk wants to use his technological prowess to <a href="https://theconversation.com/revealed-today-elon-musks-new-space-vision-took-us-from-earth-to-mars-and-back-home-again-84837">colonise Mars</a>, he is equally afraid of the <a href="https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo">consequences of AI</a>. </p>
<p>Some researchers look at the potential onslaught of <a href="http://theconversation.com/a-drop-in-the-predicted-job-losses-from-technology-taking-over-but-some-people-will-still-lose-out-86219">automation on jobs</a>. Others examine the way in which mobile phones and big data have created a <a href="https://www.theguardian.com/uk-news/2017/mar/14/public-faces-mass-invasion-of-privacy-as-big-data-and-surveillance-merge">surveillance society</a> that relentlessly competes for our ever more fragmented attention spans. </p>
<p>In this version of the story, technology is slowly dehumanising us, and has the potential to enslave or even destroy us.</p>
<p>In both attitudes, I believe our relationship to technology is mostly unconscious. This is where problems arise: technology is seen as a thing to be either praised or condemned, and something we have or do. But the question of who we are as technological beings is overlooked.</p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/the-future-of-artificial-intelligence-two-experts-disagree-79904">The future of artificial intelligence: two experts disagree</a></em> </p>
<hr>
<h2>Who we are</h2>
<p>Thinking about our relationship with technology has both philosophical and evolutionary elements.</p>
<p>There is <a href="https://theconversation.com/how-our-species-got-smarter-through-a-rush-of-blood-to-the-head-73856">strong evidence</a> that the physiological development of human beings went hand in hand with our technological predilection. For example, the growth in the size of the human brain over hundreds of thousands of years has been linked to the <a href="http://news.bbc.co.uk/2/hi/8543906.stm">invention of cooking</a>. When humans were able to eat and digest nutrients faster, their time was freed up for other things.</p>
<p><a href="https://theconversation.com/our-large-brains-evolved-thanks-to-an-ancient-arms-race-for-resources-and-mates-79183">Before we were human</a>, we were crafting stone tools and cooking with fire. Even under the premise that <a href="https://www.theatlantic.com/science/archive/2017/02/the-first-fire/515427/">fire was mastered in stages</a> by our pre-modern hominin ancestors, the signs of our development as technological beings were there early on. </p>
<p>Our success as technological beings essentially created what might be called a species-based “success formula”. We shaped tools and instruments, created new methods and processes, and crafted elements of the natural world into usable items. As an animal species among many other species competing for survival, this was our unique route to success.</p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/the-daily-life-of-a-neanderthal-revealed-from-the-gunk-in-their-teeth-73959">The daily life of a Neanderthal revealed from the gunk in their teeth</a></em> </p>
<hr>
<p>Chilean cognitive biologists Humberto Maturana and Francisco Varela <a href="https://www.amazon.com/Tree-Knowledge-Biological-Roots-Understanding/dp/0877736421">describe</a> how the cognition of a particular species is an expression of that species’ ontogeny (meaning the history or the stages in how that species developed).</p>
<p>They argue that <a href="http://www.springer.com/gp/book/9789027710154">cognition</a> is the “domain of interactions” (behaviours and actions) by which a living being furthers its success. In the case of human cognition, our “domain of interaction” is fundamentally technological. </p>
<p>But there’s a catch: the same success formula that allowed humans to conquer the planet – technological instrument power – has turned out to be the thing that now threatens our very existence. From <a href="https://theconversation.com/nuclear-weapons-how-we-learned-to-stop-worrying-until-now-83660">nuclear apocalypse</a> to <a href="https://theconversation.com/why-we-cant-rely-on-corporations-to-save-us-from-climate-change-86309">climate change</a>, <a href="https://theconversation.com/ten-years-after-the-crisis-what-is-happening-to-the-worlds-bees-77164">bee colony collapse syndrome</a> and now <a href="https://theconversation.com/what-should-governments-be-doing-about-the-rise-of-artificial-intelligence-86561">artificial intelligence</a>, we are facing challenges that our ancestors could not have imagined. </p>
<h2>Dealing with our fears</h2>
<p>US philosopher <a href="https://en.wikipedia.org/wiki/Susanne_Langer">Susanne Langer</a> argued that cinema was akin to a “<a href="https://monoskop.org/images/1/11/Langer_Susanne_K_Feeling_and_Form_A_Theory_of_Art.pdf">dream mode</a>”. In this way, we might see cinema as a way in which contemporary society engages with its unconscious – unspoken issues and anxieties that are difficult to articulate, or even taboo. </p>
<p>Films such as 2001: A Space Odyssey, The Terminator, Blade Runner, Alien and, most recently, Ex Machina all engage audiences with deep-seated, unconscious anxieties about our relationship with technology – our technological “being-ness”.</p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/science-fiction-helps-us-deal-with-science-fact-a-lesson-from-terminators-killer-robots-50249">Science fiction helps us deal with science fact: a lesson from Terminator’s killer robots</a></em> </p>
<hr>
<p>Many of these films depict our relationship with technology as pathological, indeed humanicidal. They are simply reflecting, however, what is obvious from our own experience of the 20th century. Science, technology and industrialisation through the 19th and 20th centuries formed a kind of virtuous “<a href="https://openlibrary.org/books/OL1420527M/A_history_of_civilizations">trinity of development</a>” and greatly expanded human instrumental power.</p>
<p>At the same time we saw a phase shift in our capacity to kill each other. In World War I this took the form of poison gas, machine guns, and artillery. At the close of World War II, the deadly efficiency of the Holocaust and the atomic bomb became clear. </p>
<p>From this point on, even bigger problems have emerged: the impact of economic growth on ecosystems and climate change (both issues that fundamentally threaten the existence of all humans). The German sociologist Ulrich Beck, who only recently passed away, talked about this as a “<a href="http://au.wiley.com/WileyCDA/WileyTitle/productCd-0745622216.html">world risk society</a>”. He argued that we had institutionalised the production of social risk through our contemporary industrial innovation systems.</p>
<h2>We are technological beings</h2>
<p>Given this, there is a better way for us to see and understand the challenges that we face in relation to technology. The answer is not to disown or push away all technology. As we are technological beings, this is a basic negation of who we are. </p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/an-ai-professor-explains-three-concerns-about-granting-citizenship-to-robot-sophia-86479">An AI professor explains: three concerns about granting citizenship to robot Sophia</a></em> </p>
<hr>
<p>Nor is the answer to uncritically fetishise technology as a good, as this ignores our unconscious anxieties with our technological existence. Instead, we need to come to grips with what our unconscious is trying to tell us, encoded for example through our cinematic art and our history. </p>
<p>As such we need to ask the question: what does it mean to be a responsible, mature and wise technological being? </p>
<p>If we can ask and get good answers to this type of question, I believe there is hope for us to steer paths to viable futures. </p>
<p>As <a href="http://www.abc.net.au/radionational/programs/boyerlectures/">2017 Boyer lecturer Bell</a> says, we can plan to “shape a world in which we might all want to live”.</p><img src="https://counter.theconversation.com/content/86319/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jose Ramos is founder and director of Action Foresight, a Melbourne based business. He has received funding from the P2P Foundation and the Victorian Eco Innovation Lab for research projects, as well as from businesses and government departments for various strategic foresight projects.</span></em></p>What does it mean to be a responsible, mature and wise technological being? Our future lies in seeking real answers to this type of question.Jose Ramos, Lecturer, Globalisation Studies, Victoria UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/664342016-10-04T07:24:44Z2016-10-04T07:24:44ZWhy watching Westworld’s robots should make us question ourselves<figure><img src="https://images.theconversation.com/files/140128/original/image-20161003-20228-19xbxhg.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">HBO</span></span></figcaption></figure><p>For a sci-fi fan like me, fascinated by the nature of human intelligence and the possibility of building life-like robots, it’s always interesting to find a new angle on these questions. As a re-imagining of the original 1970s science fiction film set in a cowboy-themed, hyper-real adult theme park populated by robots that look and act like people, <a href="http://www.hbo.com/westworld">Westworld</a> does not disappoint.</p>
<p>Westworld challenges us to consider the difference between being human and being a robot. From the beginning of this new serialisation on HBO we are confronted with scenes of graphic human-on-robot violence. But the robots in Westworld have more than just human-like physical bodies: they display emotion, including extreme pain; they see and recognise each other’s suffering; they bleed and even die. What makes this acceptable, at least within Westworld’s narrative, is that they are just extremely life-like human simulations; while their behaviour is realistically automated, there is “nobody home”.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/pmbYjVy9Xcg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But from the start, this notion that a machine of such complexity is still merely a machine is undermined by constant reminders that they are also so much like us. The disturbing message, echoing that of previous sci-fi classics such as <a href="https://www.theguardian.com/film/2015/mar/14/why-blade-runner-is-timeless">Blade Runner</a> and <a href="http://www.telegraph.co.uk/culture/film/11189723/AI-revisited-a-misunderstood-classic.html">AI</a>, is that machines could one day be so close to human as to be indistinguishable – not just in intellect and appearance, but also in moral terms.</p>
<p>At the same time, by presenting an alternate view of the human condition through the technological mirror of life-like robots, Westworld causes us to reflect that we are perhaps also just sophisticated machines, albeit of a biological kind – an idea that has been <a href="http://www.ted.com/talks/dan_dennett_on_our_consciousness?language=en">forcefully argued by the philosopher Daniel Dennett</a>. </p>
<p>The unfortunate robots in Westworld have, at least initially, no insight into their existential plight. They enter into each new day programmed with enthusiasm and hope, oblivious to its pre-scripted violence and tragedy. We may pity these automatons their fate – but how closely does this blinkered ignorance, belief in convenient fictions, and misguided presumption of agency resemble our own human predicament?</p>
<p>Westworld arrives at a time when people are already worried about the real-world impact of advances in robotics and artificial intelligence. Physicist <a href="https://theconversation.com/is-stephen-hawking-right-could-ai-lead-to-the-end-of-humankind-34967">Stephen Hawking</a> and technologist Elon Musk are among the powerful and respected voices to have expressed concern about <a href="https://theconversation.com/artificial-intelligence-can-we-keep-it-in-the-box-8541">allowing the AI genie to escape the bottle</a>. Westworld’s contribution to the expanding canon of science fiction dystopias will do nothing to quell such fears. Channelling Shakespeare’s King Lear, a malfunctioning robot warns us in chilling terms: “I shall have such revenges on you both. The things I will do, what they are, yet I know not. But they will be the terrors of the Earth.”</p>
<p>But against these voices are other distinguished experts trying to quell the panic. For Noam Chomsky, the <a href="http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/">intellectual godfather of modern AI</a>, all talk of matching human intelligence in the foreseeable future remains fiction, not science. One of the world’s best-known roboticists, Rodney Brooks, has called on us to relax: <a href="http://www.rethinkrobotics.com/blog/artificial-intelligence-tool-threat/">AI is just a tool</a>, not a threat.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/0kICLG4Zg8s?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>As a neuroscientist and roboticist, I agree that we are far from being able to replicate human intelligence in robot form. Our current systems are too simple, probably by several orders of magnitude. Building human-level AI is <a href="https://theconversation.com/no-need-to-panic-artificial-intelligence-has-yet-to-create-a-doomsday-machine-35148">extremely hard</a>; as Brooks says, we are just at the beginning of a very long road. But I see the path along which we are developing AI as <a href="https://theconversation.com/super-intelligent-machines-arent-to-be-feared-15709">one of symbiosis</a>, in which we can use robots to benefit society and exploit advances in artificial intelligence to boost our own biological intelligence.</p>
<h2>More than just a tool</h2>
<p>Nevertheless, in recent years the argument that robots and AI are “<a href="http://www.abrg.group.shef.ac.uk/!DATA/attachment/0351.Prescott_Robots_Are_NoT_Just_Tools.pdf">just tools</a>” has begun to frustrate me – partly because it has failed to calm the disquiet around AI, and partly because there are good reasons why these technologies are different from those that came before.</p>
<p>Even if robots are just tools, people will see them as more than that. It seems natural for people to respond to robots – even some of the <a href="https://www.newscientist.com/article/dn28293-not-like-us-how-should-we-treat-the-robots-we-live-alongside/">more simple, non-human robots</a> we have today – as though they have goals and intentions. It may be an innate tendency of our profoundly social human minds to see entities that act intelligently in this way. More importantly, people may see robots as having psychological properties such as the ability to experience suffering. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=379&fit=crop&dpr=1 600w, https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=379&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=379&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=476&fit=crop&dpr=1 754w, https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=476&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/140209/original/image-20161004-20221-1wvmkft.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=476&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Things will only get more complicated as robots look more life-like.</span>
<span class="attribution"><span class="source">Juxi</span></span>
</figcaption>
</figure>
<p>It may be difficult to persuade people to see otherwise, particularly if we continue to make robots more life-like. If so, we may have to <a href="https://theconversation.com/chappie-suggests-its-time-to-think-about-the-rights-of-robots-37955">adapt our ethical frameworks</a> to take this into account. For instance, we might consider violence towards a robot as wrong, even though the suffering is imagined rather than real. Indeed, faced with violence towards a robot, some people <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2044797">show this sort of ethical response spontaneously</a>. We will have to deal with these issues as we <a href="https://www.futurelearn.com/courses/robotic-future">learn to live with robots</a>.</p>
<p>As AI and robot technology becomes more complex, robots may come to have interesting psychological properties that make them more than just tools. The fictional robots of Westworld are clearly in this category, but already <a href="https://www.newscientist.com/article/mg22530130.400-me-myself-and-icub-meet-the-robot-with-a-self/">real robots are being developed</a> that have artificial drives and motivations, that are aware of their own bodies as distinct from the rest of the world, that are equipped with internal models of themselves and others as social entities, and that are able to think about their own past and future. </p>
<p>These are not properties that we find in drills and screwdrivers. They are embryonic psychological capacities that, so far, have only been found in living, sentient entities such as humans and animals. Stories such as Westworld remind us that as we progress toward ever more sophisticated AI, we should consider that this path might lead us both to machines that are more like us, and to seeing ourselves as more like machines.</p>
<p class="fine-print"><em><span>Tony Prescott receives funding from the European Projects WYSIWYD (What You Say is What You Did, project no. 612139), EASEL (Expressive Agents for Symbiotic Intelligence and Learning, project no. 611971) and the European FET Flagship Programme through the Human Brain Project (project no. 720270). He is affiliated with the University of Sheffield and with Consequential Robotics Ltd.</span></em></p>New HBO series reimagines a group of life-like robots programmed with hope but marred in violence. They might be more human than we think.Tony Prescott, Professor of Cognitive Neuroscience and Director of the Sheffield Centre for Robotics, University of SheffieldLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/524082016-01-06T11:06:17Z2016-01-06T11:06:17ZWhy you’ll never be able to upload your brain to the cloud<figure><img src="https://images.theconversation.com/files/106123/original/image-20151215-23871-ys20qa.png?ixlib=rb-1.1.0&rect=32%2C25%2C618%2C416&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Let's do this! But what exactly do we upload?</span> <span class="attribution"><span class="source">Nicolas Rougier</span>, <span class="license">Author provided</span></span></figcaption></figure><p>In the <a href="http://theconversation.com/silicon-soul-the-vain-dream-of-electronic-immortality-52368">first article</a> in this series, we saw how the mind and body literally cannot be separated, and also why robotics isn’t capable of replicating either one.</p>
<p>But let’s assume that we’ve solved the problems of sensors and muscles and all the rest, and accept that the uploaded brain won’t truly reflect our mind. Now comes the big challenge: uploading the brain. But what is a brain exactly? This term usually refers to the cortex and possibly some subcortical structures, including the amygdala, hippocampus and basal ganglia. But the central nervous system is actually made of several other structures that are no less critical, including the cerebellum, thalamus, hypothalamus, medulla and brain stem.</p>
<h2>Making the connections</h2>
<p>If we consider the whole central nervous system, we are facing an average of <a href="http://journal.frontiersin.org/article/10.3389/neuro.09.031.2009/full">86 billion neurons</a>, and each of these neurons contacts an average of 10,000 other neurons, representing a grand total of approximately 860 trillion connections. This is really huge. So exactly what do we have to upload into the computer? The type, the size and the geometry of each neuron? Its current membrane potential? The size and position of the axon and its state of <a href="http://www.brainfacts.org/brain-basics/neuroanatomy/articles/2015/myelin/">myelination</a>? The complete geometry of the dendritic tree? The location of the various ion pumps? The number, the position and the state of the different neuro-mediators? Any of these could be critical, and they can only be taken into account in state-of-the-art computer models (and for a few neurons only). The problem is that we do not know exactly what it is that makes us who we are and different from anyone else (and I’m not even talking about learning). </p>
<p>As a fallback – and only if we had the proper tools to record each of these parameters even once – we could try to transfer everything. However, this would require potentially thousands or even millions of pieces of information for a single neuron. Multiplied across all those neurons and their connections, we would reach figures in the zetta domain (for your information, the order is kilo, mega, giga, tera, peta, exa and zetta, multiplying by 1,000 at each step). This number is so huge that it cannot yet be manipulated as a whole by computer science. And we are talking only about storage, because we would also have to ensure that this model runs in real time, since nobody would happily accept a silicon mind that runs at reduced speed. From a purely technical perspective, we are thus very far (<em>really</em> very far) from making this happen.</p>
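The arithmetic above can be sketched in a few lines. This is a back-of-envelope estimate only: the neuron and synapse counts come from the article, while the "values per connection" figures are illustrative assumptions, not measurements.

```python
# Back-of-envelope scale of a whole-brain description.
# Counts from the article; per-connection figures are illustrative assumptions.
NEURONS = 86e9              # ~86 billion neurons in the central nervous system
SYNAPSES_PER_NEURON = 1e4   # ~10,000 connections per neuron

connections = NEURONS * SYNAPSES_PER_NEURON
print(f"Total connections: {connections:.1e}")   # ~8.6e14 (860 trillion)

# If each connection needed thousands to millions of values to describe it:
for params_per_connection in (1e3, 1e6):
    total_values = connections * params_per_connection
    print(f"{params_per_connection:.0e} values/connection "
          f"-> {total_values:.1e} values to store")
# At a million values per connection the total approaches 10**21,
# i.e. the zetta range.
```

Even the low end of this range sits far beyond what any system today can record, store and simulate as a whole.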
<p>Worse, research indicates that Moore’s Law – which suggests that computer power doubles every 18 months – is reaching its limits, meaning we may never attain the necessary level of technology. The <a href="https://www.humanbrainproject.eu">Human Brain Project</a> foresaw this problem and planned from the beginning to use only simplified models of neurons and synapses. If you’re interested in more accurate models, take a look at the <a href="http://www.openworm.org">OpenWorm</a> project, which doesn’t pretend to simulate any more than a few hundred neurons.</p>
<h2>The bird in the machine</h2>
<p>This idea of transferring one’s brain into a machine is widespread in both <a href="https://en.wikipedia.org/wiki/Axiomatic_(story_collection)">literature</a> and <a href="https://en.wikipedia.org/wiki/Transcendence_(2014_film)">cinema</a>. It has gained renewed interest with recent advances in artificial intelligence. However, there may be some confusion regarding what artificial intelligence (AI) actually is and what its goals are. </p>
<p>When the media cover artificial intelligence, they generally refer to machine learning and robotics, neither of which really seeks to understand the brain or cognition (with some notable exceptions, such as the work of <a href="https://flowers.inria.fr/">Pierre-Yves Oudeyer</a>). This confusion likely stems from the fact that new algorithms have been designed that enable excellent performance on tasks that were previously thought to be reserved for humans – recognizing images, driving a car and so on.</p>
<p>But while machine learning and robotics are progressing at amazing speed, this progress does not tell us anything about how the biological brain works (at least not directly). If we want to know, we have to look at neuroscience and more specifically at computational neuroscience. A parallel could be drawn between aeronautics (AI) and ornithology (neuroscience). Even though the early attempts at flying were directly inspired by the flight of birds, this approach was abandoned long ago in favor of designing ever more efficient aircraft (speed, payload, etc) using techniques specific to aeronautics. To better understand birds, you must turn to ornithology and biology. Hence, talking about uploading a brain to a computer because of the progress of AI makes as much sense as gluing feathers on an airplane and pretending it’s an artificial bird.</p>
<p>No one knows if it will ever be possible to “upload a brain to a computer.” But what is certain today is that in the current state of science, this statement makes no sense and will remain so without a major epistemological breakthrough in our understanding of the brain and how it works.</p>
<p class="fine-print"><em><span>Nicolas P. Rougier does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.</span></em></p>The endeavor assumes that computers could manage billions of billions of cerebral connections. Alas, that’s not happening anytime soon.Nicolas P. Rougier, Chargé de Recherche, InriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/523682016-01-05T11:09:42Z2016-01-05T11:09:42ZSilicon soul: the vain dream of electronic immortality<figure><img src="https://images.theconversation.com/files/105932/original/image-20151215-23166-1593knw.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Our bodies set some requirements for what it would mean to upload our brains.</span> <span class="attribution"><span class="source">Nicolas Rougier</span>, <span class="license">Author provided</span></span></figcaption></figure><blockquote>
<p>In just over 30 years, humans will be able to upload their entire minds to computers and become digitally immortal. – Ray Kurzweil, Global Futures 2045 International Congress (2013)</p>
</blockquote>
<p>Without even considering the ethical, philosophical, social or legal scope of such a statement, it’s important to consider whether it actually makes any sense. To give an educated guess, we have to move away from computer science and look at what biology and neuroscience can teach us.</p>
<h2>The sensible world</h2>
<p>In his book <a href="https://mitpress.mit.edu/books/being-there">Being There: Putting Brain, Body and World Together Again</a>, Andy Clark explains that:</p>
<blockquote>
<p>The biological mind is, first and foremost, an organ for controlling the biological body. Minds make motions, and they must make them fast – before the predator catches you, or before your prey gets away from you. Minds are not disembodied logical reasoning devices.</p>
</blockquote>
<p>To better understand this assertion, it’s essential to know that our bodies are literally covered with sensors – chemical, mechanical, visual, thermal, proprioceptive (perception of the body), nociceptive (perception of pain). All of them inform the brain about the outside world (exteroception) and the inner world (interoception), allowing it to regulate the body. The majority of our brain is actually dedicated to the processing of sensory information, and the largest part of that is devoted to visual information, occupying the entire occipital lobe and large parts of the temporal and parietal lobes. We are mostly visual beings who, incidentally, think.</p>
<p>If someone wants to “upload his entire mind to a computer,” the problem of sensors must be solved. A quick and dirty solution could be to not worry about them and just pretend that all sensory neurons would remain silent forever. However, 50 years ago, <a href="https://en.wikipedia.org/wiki/Donald_O._Hebb">Donald O. Hebb</a> conducted a series of experiments to study the effects of <a href="http://psycnet.apa.org/psycinfo/1958-00206-001">sensory deprivation</a>. He generously paid students to recline 24/7, taking care to deprive them of most of their senses using glasses, helmets, gloves and so on. The majority of the students abandoned the experiment after two or three days because they were no longer able to develop coherent thoughts and began to suffer from auditory and visual hallucinations. The experiments attracted considerable interest from the CIA (which financed the original study), and the agency later “improved” the process to the point where it became an instrument of <a href="http://original.antiwar.com/engelhardt/2009/06/07/pioneers-of-torture/">psychological torture</a>.</p>
<h2>The body electric</h2>
<p>Consequently, if we want to “upload our brain” without going insane, it’s imperative for the uploaded brain to be connected to an artificial body that can perceive the outside world and act on it. But what kind of artificial body do we have today? Robotic bodies where retinas are replaced by cameras and muscles by motors? To some extent, yes, but this solution would be only a pale replica, far from the complexity and intelligence of the human body, as nicely explained in Rolf Pfeifer and Josh Bongard’s book <a href="https://mitpress.mit.edu/books/how-body-shapes-way-we-think">How the Body Shapes the Way We Think</a>.</p>
<p>During childhood, a brain learns through experience to control its body and to leverage its specifics. For example, consider the fingertips, which are sufficiently soft and sensitive that we can easily grasp small objects. There’s no need for the brain to send a precise command – the intelligence is in the body itself. Imagine trying to do the same thing with thimbles over each finger, and you will understand how your body automatically solves a number of problems all by itself.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=267&fit=crop&dpr=1 600w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=267&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=267&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=336&fit=crop&dpr=1 754w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=336&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/102512/original/image-20151119-18421-1ag03wp.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=336&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Diagram of the retina, in its sensory complexity.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Retina-diagram.svg">Cajal</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>What about the artificial eyes we’d need? Even though high-resolution cameras now exist, the eyes we’re born with each have approximately five million cones and 100 million rods, plus several “preprocessing” stages performed by the horizontal, bipolar, amacrine and ganglion cells. We are indeed very far from being able to reproduce a full artificial retina, even though some <a href="http://www.institut-vision.org">amazing research</a> in Paris has succeeded in helping the vision-impaired see again.</p>
<p>As a first step, we could therefore use those simplified robotic bodies with their reduced sensory and motor skills. Would it affect our minds? Yes. Our cognition depends on the interactions we have with the world, and this interaction is conveyed by both our perceptions and our actions. If you change them, you also change the sensory experience of the world as well as its underlying logic. Cognition is embodied.</p>
<p><em>We further explore this question in the <a href="https://theconversation.com/why-youll-never-be-able-to-upload-your-brain-to-the-cloud-52408">second part</a> of this article.</em></p>
<p class="fine-print"><em><span>Nicolas P. Rougier does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.</span></em></p>Uploading one’s mind to a computer in order to attain digital immortality has long been the fantasy of geeks and billionaires. So what’s stopping us?Nicolas P. Rougier, Chargé de Recherche, InriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/85412012-08-05T20:18:03Z2012-08-05T20:18:03ZArtificial intelligence – can we keep it in the box?<figure><img src="https://images.theconversation.com/files/13665/original/q9ycgyyb-1343711877.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">"We should stop treating intelligent machines as the stuff of science fiction."</span> <span class="attribution"><span class="source">Cea.</span></span></figcaption></figure><p>We know how to deal with suspicious packages – as carefully as possible! These days, we let <a href="http://www.army.mod.uk/equipment/engineering/18825.aspx">robots</a> take the risk. But what if the robots <em>are</em> the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?</p>
<h2>Exploding intelligence?</h2>
<p>Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge <a href="http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf">replied</a>: “Yes, but only briefly”. </p>
<p>He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “<a href="http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html">technological singularity</a>”, and thought that it was unlikely to be good news, from a human point of view. </p>
<p>Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)? </p>
<h2>AI as a low achiever</h2>
<p>Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have been mixed with so-called “<a href="http://en.wikipedia.org/wiki/AI_winter">AI winters</a>” – times of reduced funding and interest, after promised capabilities fail to materialise. </p>
<p>Some people point to this as evidence machines are never likely to reach human levels of intelligence, let alone to exceed them. Others point out that the same could have been said about heavier-than-air flight. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/13668/original/6rn6whj5-1343712776.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">ra1000</span></span>
</figcaption>
</figure>
<p>The history of that technology, too, is littered with naysayers (some of whom refused to believe reports of the Wright brothers’ success, apparently). For human-level intelligence, as for heavier-than-air flight, naysayers need to confront the fact nature has managed the trick: think brains and birds, respectively. </p>
<p>A good naysaying argument needs a reason for thinking that human technology can never reach the bar of human-level intelligence.</p>
<p>Pessimism is much easier. For one thing, we know nature managed to put human-level intelligence in skull-sized boxes, and that some of those skull-sized boxes are making progress in figuring out how nature does it. This makes it hard to maintain that the bar is permanently out of reach of artificial intelligence – on the contrary, we seem to be improving our understanding of what it would take to get there.</p>
<h2>Moore’s Law and narrow AI</h2>
<p>On the technological side of the fence, we seem to be making progress towards the bar, both in hardware and in software terms. In the hardware arena, <a href="http://en.wikipedia.org/wiki/Moore's_law">Moore’s law</a>, which predicts that the amount of computing power we can fit on a chip doubles every two years, shows little sign of slowing down. </p>
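The compounding behind that claim is easy to underestimate. As a rough sketch (using the two-year doubling period quoted above, and nothing about any particular chip):

```python
# Compound growth under Moore's law: computing power per chip
# doubling every two years (the figure quoted in the text).
DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(years: float) -> float:
    """Multiplicative growth in per-chip computing power over a span of years."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

print(growth_factor(10))   # one decade  -> 32.0 (five doublings)
print(growth_factor(40))   # four decades -> ~1e6 (roughly a million-fold)
```

A steady doubling, sustained for decades, turns modest yearly gains into million-fold differences – which is why the "steeply rising curve" argument carries the weight it does.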
<p>In the software arena, people debate the possibility of “<a href="https://en.wikipedia.org/wiki/Strong_AI">strong AI</a>” (artificial intelligence that matches or exceeds human intelligence) but the caravan of “<a href="http://www.articledashboard.com/Article/Robotics-Artificial-Intelligence-and-the-Possibilities/741040">narrow AI</a>” (AI that’s limited to particular tasks) moves steadily forward. One by one, computers take over domains that were previously considered off-limits to anything but human intellect and intuition. </p>
<p>We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on. </p>
<p>Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect! </p>
<h2>What’s so bad about intelligent helpers?</h2>
<p>Would it be a bad thing if computers were as smart as humans? The list of current successes in narrow AI might suggest pessimism is unwarranted. Aren’t these applications mostly useful, after all? A little damage to Grandmasters’ egos, perhaps, and a few glitches on financial markets, but it’s hard to see any sign of impending catastrophe on the list above. </p>
<p>That’s true, say the pessimists, but as far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others. (Having robots drive our cars may completely rewire our economies in the next decade or so, for example). </p>
<p>The greatest concerns stem from the possibility that computers might take over domains that are critical to controlling the speed and direction of technological progress itself.</p>
<h2>Software writing software?</h2>
<p>What happens if computers reach and exceed human capacities to write computer programs? The first person to consider this possibility was the Cambridge-trained mathematician <a href="http://en.wikipedia.org/wiki/I._J._Good">I J Good</a> (who worked with <a href="https://theconversation.com/calls-for-a-posthumous-pardon-but-who-was-alan-turing-4773">Alan Turing</a> on code-breaking at Bletchley Park during the second world war, and later on early computers at the University of Manchester). </p>
<p>In 1965 Good observed that having intelligent machines develop even more intelligent machines would result in an “<a href="http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf">intelligence explosion</a>”, which would leave human levels of intelligence far behind. He called the creation of such a machine “our last invention” – which is unlikely to be “Good” news, the pessimists add!</p>
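<p>Good’s feedback loop can be made concrete with a toy growth model (our illustration, with arbitrary numbers – nothing here comes from Good’s paper): steady human-driven progress adds a fixed amount of capability per year, while a self-improving system’s yearly gain scales with its current capability.</p>

```python
# A toy sketch of the "intelligence explosion" argument (illustrative only;
# the functions, parameters and units are arbitrary assumptions, not a model
# anyone has proposed in this form).

def human_progress(years, rate=1.0):
    """Capability improves by a fixed amount per year: a straight line."""
    return [1.0 + rate * t for t in range(years)]

def machine_progress(years, feedback=0.5):
    """Each year's gain is proportional to current capability:
    Good's loop, where better machines build still better machines."""
    capability = [1.0]
    for _ in range(years - 1):
        capability.append(capability[-1] * (1.0 + feedback))
    return capability

years = 20
human = human_progress(years)
machine = machine_progress(years)

# The first year in which the self-improving curve overtakes the linear one.
crossover = next(t for t in range(years) if machine[t] > human[t])
```

<p>However the parameters are chosen, the exponential curve eventually overtakes the linear one and the gap then widens without bound – which is the qualitative point of the “explosion” metaphor.</p>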
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/13669/original/hfx938cc-1343712834.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">FlySi</span></span>
</figcaption>
</figure>
<p>In the above scenario, the moment computers become better programmers than humans marks the point in history where the speed of technological progress shifts from the speed of human thought and communication to the speed of silicon. This is a version of what Vernor Vinge called the “technological singularity”: beyond this point, the curve is driven by new dynamics and the future becomes radically unpredictable.</p>
<h2>Not just like us, but smarter!</h2>
<p>It would be comforting to think that any intelligence that surpassed our own capabilities would be like us in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) matter to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs such as artificial intelligences. </p>
<p>By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/13667/original/vbtyzr7n-1343712745.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">JD Hancock</span></span>
</figcaption>
</figure>
<p>The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen. </p>
<p>People sometimes complain that corporations behave like psychopaths when they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much, much cleverer and much, much faster.</p>
<h2>Getting in the way</h2>
<p>By now you see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable – things such as life and a sustainable environment. </p>
<p>If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.</p>
<h2>How much time do we have?</h2>
<p>It’s hard to say how urgent the problem is, even if the pessimists are right. We don’t yet know exactly what makes human thought different from the current generation of machine learning algorithms, for one thing, so we don’t know the size of the gap between the fixed bar and the rising curve. </p>
<p>But some trends point towards the middle of the present century. In <a href="http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf">Whole Brain Emulation: A Roadmap</a>, the Oxford philosophers Anders Sandberg and Nick Bostrom suggest our ability to scan and emulate human brains might be sufficient to replicate human performance in silicon around that time.</p>
<h2>“The pessimists might be wrong!”</h2>
<p>Of course – making predictions is difficult, as they say, especially about the future! But in ordinary life we take uncertainties very seriously when a lot is at stake. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=839&fit=crop&dpr=1 600w, https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=839&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=839&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1055&fit=crop&dpr=1 754w, https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1055&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/13670/original/3zm682dr-1343713413.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1055&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Sebastianlund</span></span>
</figcaption>
</figure>
<p>That’s why we use expensive robots to investigate suspicious packages, after all (even when we know that only a very tiny proportion of them will turn out to be bombs).</p>
<p>If the future of AI is “explosive” in the way described here, it could be the last bomb the human species ever encounters. A suspicious attitude would seem more than sensible, then, even if we had good reason to think the risks are very small. </p>
<p>At the moment, even that degree of reassurance seems out of our reach – we don’t know enough about the issues to estimate the risks with any high degree of confidence. (Feeling optimistic is not the same as having good reason to be optimistic, after all).</p>
<h2>What to do?</h2>
<p>A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later. </p>
<p>Once we put such a future on the agenda, we can begin some serious research into ways to ensure that outsourcing intelligence to machines would be safe and beneficial, from our point of view. </p>
<p>Perhaps the best cause for optimism is that, unlike ordinary ticking parcels, the future of AI is still being assembled, piece by piece, by hundreds of developers and scientists throughout the world. </p>
<p>The future isn’t yet fixed, and there may well be things we can do now to make it safer. But this is only a reason for optimism if we take the trouble to make it one, by investigating the issues and thinking hard about the safest strategies. </p>
<p>We owe it to our grandchildren – not to mention our ancestors, who worked so hard for so long to get us this far! – to make that effort. </p>
<p><strong>Further information:</strong><br>
For a thorough and thoughtful analysis of this topic, we recommend <a href="http://consc.net/papers/singularity.pdf">The Singularity: A Philosophical Analysis</a> by the Australian philosopher <a href="http://consc.net/chalmers/">David Chalmers</a>. Jaan Tallinn’s recent public lecture <a href="http://sydney.edu.au/sydney_ideas/lectures/2012/jaan.tallinn.shtml">The Intelligence Stairway</a> is available as a podcast or on YouTube via <a href="http://sydney.edu.au/sydney_ideas/index.shtml">Sydney Ideas</a>.</p>
<p><strong>The Centre for the Study of Existential Risk</strong><br> The authors are the co-founders, together with the eminent British astrophysicist <a href="http://en.wikipedia.org/wiki/Martin_Rees,_Baron_Rees_of_Ludlow">Lord Martin Rees</a>, of a new project to establish a Centre for the Study of Existential Risk (CSER) at the University of Cambridge. </p>
<p>The Centre will support research to identify and mitigate catastrophic risk from developments in human technology, including AI – further details at <a href="http://cser.org/">CSER.ORG</a>.</p>
<p class="fine-print"><em><span>Huw Price has received funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Jaan Tallinn is one of the founders of Skype. He does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. He is a donor to the Singularity Institute.</span></em></p>
<p class="fine-print"><em>Huw Price, Bertrand Russell Professor of Philosophy, University of Cambridge, and Jaan Tallinn, Invited User, University of Cambridge. Licensed as Creative Commons – attribution, no derivatives.</em></p>