<h1>Reality of exponential growth of COVID-19 shows South Africa’s lockdown is right</h1>
<figure><img src="https://images.theconversation.com/files/322967/original/file-20200325-168889-ozhpf4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">South African President Cyril Ramaphosa, centre, ordered a 21-day lockdown.</span> <span class="attribution"><span class="source">GCIS/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure>
<p><a href="https://mathworld.wolfram.com/ExponentialGrowth.html">Exponential growth</a> is a hard thing for the human mind to grasp. In nature, it seldom comes at us with the rapidity with which the COVID-19 pandemic is unfolding. This leads people and governments to make suboptimal decisions, like waiting until the disaster is self-evident before reacting.</p>
<p>Take compound interest, for example. Every time you look at an account that attracts interest, the interest is a <em>little</em> more. If you stop looking for a long time, it’s a <em>lot</em> more. This is exponential growth: the new bit you add is the same fraction each time but it is a fraction of the new, bigger amount.</p>
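<p>To make that concrete, here is a minimal Python sketch of compounding (the starting amount and 5% rate are illustrative figures, not from any real account):</p>
<pre><code>
# Exponential growth via compounding: each step adds the same
# fraction, but of an ever-larger total. Figures are illustrative.
balance = 1000.0      # hypothetical starting amount
rate = 0.05           # 5% interest per period

for periods in (1, 10, 50):
    amount = balance * (1 + rate) ** periods
    print(f"after {periods:2d} periods: {amount:10.2f}")

# after  1 periods:    1050.00  -- a little more
# after 10 periods:    1628.89
# after 50 periods:   11467.40  -- a lot more
</code></pre>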
<p>South Africa’s President Cyril Ramaphosa has announced a <a href="https://www.aljazeera.com/news/2020/03/coronavirus-lockdown-south-africa-orders-three-week-restrictions-200324080325961.html">21-day lockdown</a>, in a much more complete form than any developed country has contemplated. Some have reacted with dismay, arguing that this approach will <a href="https://theconversation.com/covid-19-the-cure-could-be-worse-than-the-disease-for-south-africa-134436">cripple the economy</a>. Others have turned to <a href="https://www.sciencealert.com/the-new-coronavirus-isn-t-like-the-flu-but-they-have-one-big-thing-in-common">now-debunked</a> ideas like “it’s no worse than the flu”. Still others fear that a lockdown is an <a href="https://www.theguardian.com/commentisfree/2020/mar/18/politics-public-covid-19-tobacco-johnson">attack on rights</a>; but this is like arguing that the state has no right to ban dangerous driving. South Africa has one of the strongest rights-based constitutions in the world. No one is rushing to court to overturn the plan.</p>
<p>As a computer scientist who also works in bioinformatics, I’m very familiar with exponential growth. It is this background that makes it clear to me that Ramaphosa has done the right thing.</p>
<h2>Good growth</h2>
<p>In 1965 engineer Gordon Moore predicted that computing power would continue to grow exponentially while the monetary cost of that power stayed constant: more power, for the same or less money. <a href="https://www.britannica.com/technology/Moores-law">Moore’s Law</a> has held true for decades.</p>
<p>In the early 1990s, I predicted that IBM would be in trouble when microprocessors overtook traditional mainframes in speed because they were on a faster improvement trajectory. Right on schedule, IBM set a new <a href="http://tech.mit.edu/V112/N66/ibm.66w.html">record for losses</a> in 1993.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/322537/original/file-20200324-155631-1w4stxa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sequencing Cost per Genome August.</span>
<span class="attribution"><span class="source">National Human Genome Research Institute</span></span>
</figcaption>
</figure>
<p>Advances in gene sequencing are even more spectacular: the initial human genome project cost nearly $3-billion and took about 10 years; today we can sequence thousands of genomes for research purposes.</p>
<p>That’s the good kind – better stuff for less.</p>
<h2>COVID-19 and exponential growth</h2>
<p>Now, consider exponential growth when it comes to COVID-19. Let’s start with South Africa.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=212&fit=crop&dpr=1 600w, https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=212&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=212&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=267&fit=crop&dpr=1 754w, https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=267&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/322609/original/file-20200324-155658-f1ahvd.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=267&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">South African COVID-19 Trend.</span>
<span class="attribution"><span class="source">Wikipedia own graph</span></span>
</figcaption>
</figure>
<p>A log scale flattens out an exponential trend and makes it easier to see how it is progressing. In South Africa’s case it is early days: locally transmitted cases are mixed with discoveries of imported cases, so the underlying trend is not yet clear. Data are only available for 5–24 March (24 March was the date of writing, so the final number could still rise).</p>
<p>The <em>Mail & Guardian</em> newspaper has published <a href="https://mg.co.za/article/2020-03-19-projections-vital-to-deal-with-crisis/">projections</a> that show exponential growth which would put the country at over 4,000 cases by 1 April. According to its report, where South Africa stands as March draws to a close is where Italy was three weeks ago.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=201&fit=crop&dpr=1 600w, https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=201&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=201&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=253&fit=crop&dpr=1 754w, https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=253&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/322546/original/file-20200324-155666-1odomd8.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=253&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">US COVID-19 Trend.</span>
<span class="attribution"><span class="source">Worldometer numbers; graph by myself</span></span>
</figcaption>
</figure>
<p>Since the US is further along the trend, let’s look at how its number of COVID-19 cases has increased to get an idea of why this is such a dangerous pandemic. The <a href="https://www.worldometers.info/coronavirus/">Worldometer</a> site provides convenient graphics to see the issue. You can view their graphs on a linear or log scale. I redrew their graphs for the US for the period of 15 February to 22 March 2020.</p>
<p>Look at the scale on the log₁₀ graph. The top number on the vertical axis is 5. That represents 10⁵, or 100,000. The two dotted lines illustrate how a small change in the trend will take you to 100,000 a few days sooner or later. And from the graph’s current high point of 43,734 cases, 100,000 is scarily close.</p>
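<p>A rough way to see this is to count the doublings left before 100,000. A minimal sketch, using the case count quoted above (the three- and four-day doubling times are illustrative assumptions, not figures from the graphs):</p>
<pre><code>
import math

cases = 43_734           # current US total, as quoted above
target = 100_000         # the top of the log-10 axis (10**5)

doublings = math.log2(target / cases)   # ~1.19 doublings to go

# A small change in the assumed doubling time shifts the date
# of reaching 100,000 by barely a day:
for doubling_days in (3, 4):
    print(f"doubling every {doubling_days} days: "
          f"{doublings * doubling_days:.1f} days to 100,000")
</code></pre>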
<p>It is this scarily fast acceleration that catches the human ability to process information by surprise.</p>
<p>The day too many patients arrive for a country’s intensive care units is like the first ripple before a tsunami. The next day, the ripple is bigger. Then bigger still. It doesn’t let up. When it becomes clear that it is a full-blown tsunami, the next wave is even bigger.</p>
<p>This is the tyranny of exponential growth: societies with the best health systems are getting hit worst because the cliff they fall off when it overwhelms them is that much higher.</p>
<p>For this reason, waiting until there is a crisis is very much the wrong strategy. The day your capacity to treat patients is overwhelmed, it will stay that way unless you can almost totally stop the growth in the number of cases.</p>
<h2>Economy or lives</h2>
<p>Eventually numbers <em>will</em> drop as a sufficient fraction of the population develops immunity (the herd immunity threshold is <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30669-3/fulltext">estimated at 60%</a>). But until then, the medical system will remain overwhelmed, leading to many unnecessary deaths.</p>
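<p>The 60% figure is consistent with the standard herd-immunity threshold formula, 1 − 1/R<sub>0</sub>, if the basic reproduction number R<sub>0</sub> is about 2.5 – an assumed value used here for illustration, not one given in this article:</p>
<pre><code>
# Herd-immunity threshold: the immune fraction at which each case
# infects, on average, fewer than one other person.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

print(herd_immunity_threshold(2.5))   # 0.6 -- the ~60% quoted above
</code></pre>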
<p>This is why the notion that you have a binary choice between the economy and the medically optimal strategy is wrong.</p>
<p>If a country cannot either stop the thing in its tracks or do what some describe as “<a href="https://theconversation.com/how-to-flatten-the-curve-of-coronavirus-a-mathematician-explains-133514">flattening the curve</a>”, it will in any case crash its economy.</p>
<p>Flattening the curve assumes a country can keep numbers below the level where medical services are overwhelmed. The risk with this strategy is overshooting – not quite flattening the curve enough. This is risky in the <a href="https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf">US or UK</a>. In a country like South Africa with very poor public health, and high rates of <a href="https://www.unaids.org/en/regionscountries/countries/southafrica">HIV</a> and <a href="https://extranet.who.int/sree/Reports?op=Replet&name=/WHO_HQ_Reports/G2/PROD/EXT/TBCountryProfile&ISO2=ZA&outtype=PDF">tuberculosis</a>, the risk gets even higher.</p>
<p>South Africa’s extreme strategy of a 21-day lockdown is justifiable on medical grounds. What about the economy?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/all-world-leaders-face-mega-covid-19-crises-how-ramaphosa-is-stacking-up-134682">All world leaders face mega COVID-19 crises: how Ramaphosa is stacking up</a>
</strong>
</em>
</p>
<hr>
<p>Some have argued against extreme lockdowns on the basis that they will trash the global economy. But countries that have not taken this approach are already paying an extreme price – not only in loss of life but economically.</p>
<p>When the world comes out of this, the country that has preserved its human capital the best will come out ahead. Its leaders will be able to look their people in the eye and honestly say “I did my best”.</p>
<p class="fine-print"><em><span>Philip Machanick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The notion that there is a binary choice between the economy and the medically optimal strategy is wrong.Philip Machanick, Associate Professor of Computer Science, Rhodes UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/524082016-01-06T11:06:17Z2016-01-06T11:06:17ZWhy you’ll never be able to upload your brain to the cloud<figure><img src="https://images.theconversation.com/files/106123/original/image-20151215-23871-ys20qa.png?ixlib=rb-1.1.0&rect=32%2C25%2C618%2C416&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Let's do this! But what exactly do we upload?</span> <span class="attribution"><span class="source">Nicolas Rougier</span>, <span class="license">Author provided</span></span></figcaption></figure><p>In the <a href="http://theconversation.com/silicon-soul-the-vain-dream-of-electronic-immortality-52368">first article</a> in this series, we saw how the mind and body literally cannot be separated, and also why robotics isn’t capable of replicating either one.</p>
<p>But let’s assume that we’ve solved the problems of sensors and muscles and all the rest, and accept that the uploaded brain won’t truly reflect our mind. Now comes the big challenge: uploading the brain. But what is a brain exactly? This term usually refers to the cortex and possibly some subcortical structures, including the amygdala, hippocampus and basal ganglia. But the central nervous system is actually made of several other structures that are no less critical, including the cerebellum, thalamus, hypothalamus, medulla and brain stem.</p>
<h2>Making the connections</h2>
<p>If we consider the whole central nervous system, we are facing an average of <a href="http://journal.frontiersin.org/article/10.3389/neuro.09.031.2009/full">86 billion neurons</a>, and each of these neurons contacts an average of 10,000 other neurons, representing a grand total of approximately 860 trillion connections. This is really huge. So exactly what do we have to upload into the computer? The type, the size and the geometry of each neuron? Its current membrane potential? The size and position of the axon and its state of <a href="http://www.brainfacts.org/brain-basics/neuroanatomy/articles/2015/myelin/">myelination</a>? The complete geometry of the dendritic tree? The location of the various ion pumps? The number, the position and the state of the different neuro-mediators? Any of these could be critical, and they can only be taken into account in state-of-the-art computer models (and for a few neurons only). The problem is that we do not know exactly what it is that makes us who we are and different from anyone else (and I’m not even talking about learning). </p>
<p>As a fallback – and only if we had the proper tools to record each of these parameters once – we could try to transfer everything. However, this would require potentially some thousands or even millions of pieces of information for a single neuron. If you consider just the number of neurons, we would reach a figure in the zetta domain (for your information, the order is kilo, mega, giga, tera, peta, exa and zetta, multiplying by 1,000 at each step). This number is so huge that it cannot yet be manipulated as a whole by computer science. And we are talking only about the brain’s storage, because we also have to ensure that this model runs in real time, since nobody would happily accept a silicon mind that runs at reduced speed. From a purely technical perspective, we are thus very far (<em>really</em> very far) from making this happen.</p>
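<p>A back-of-envelope sketch shows how quickly the numbers escalate (the per-neuron and per-value figures below are assumptions for illustration):</p>
<pre><code>
neurons = 86e9              # average neurons in the CNS, as above
values_per_neuron = 1e6     # assumed: "millions of pieces" per neuron
bytes_per_value = 8         # assumed: one double-precision float each

total = neurons * values_per_neuron * bytes_per_value
print(f"{total:.1e} bytes")  # ~6.9e+17 -- hundreds of petabytes already

# Recording rich state for each of the ~860 trillion connections as
# well pushes the estimate up by further orders of magnitude, toward
# the zetta (10**21) domain mentioned above.
</code></pre>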
<p>Worse, research indicates that Moore’s Law – which suggests that computer power doubles every 18 months – is reaching its limits, suggesting that we may never attain the necessary level of technology. The <a href="https://www.humanbrainproject.eu">Human Brain Project</a> foresaw this problem and planned from the beginning to use only simplified models of neurons and synapses. If you’re interested in more accurate models, take a look at the <a href="http://www.openworm.org">OpenWorm</a> project, which doesn’t pretend to simulate any more than a few neurons.</p>
<h2>The bird in the machine</h2>
<p>This idea of transferring one’s brain into a machine is widespread in both <a href="https://en.wikipedia.org/wiki/Axiomatic_(story_collection)">literature</a> and <a href="https://en.wikipedia.org/wiki/Transcendence_(2014_film)">cinema</a>. It has gained renewed interest with recent advances in artificial intelligence. However, there may be some confusion regarding what is actually artificial intelligence (AI) and what are its goals. </p>
<p>When media cover artificial intelligence, they generally refer to machine learning and robotics, neither of which really seeks to understand the brain or cognition (with some notable exceptions, such as the work of <a href="https://flowers.inria.fr/">Pierre-Yves Oudeyer</a>). This confusion likely stems from the fact that new algorithms have been designed that enable excellent performance on tasks that were previously thought to be reserved for humans – recognizing images, driving a car and so on.</p>
<p>But if machine learning and robotics are progressing at an amazing speed, this does not tell us anything about how the biological brain works (at least not directly). If we want to know, we have to look at neuroscience and more specifically at computational neuroscience. A parallel could be drawn between aeronautics (AI) and ornithology (neuroscience). Even though the early attempts at flying were directly inspired by the flight of birds, this was abandoned long ago in favor of the design of ever more efficient aircraft (speed, payload, etc) using techniques specific to aeronautics. To better understand birds, you must turn to ornithology and biology. Hence, talking about uploading a brain to a computer because of the progress of AI makes as much sense as gluing feathers on an airplane and pretending it’s an artificial bird.</p>
<p>No one knows if it will ever be possible to “upload a brain to a computer.” But what is certain today is that in the current state of science, this statement makes no sense and will remain so without a major epistemological breakthrough in our understanding of the brain and how it works.</p>
<p class="fine-print"><em><span>Nicolas P. Rougier ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>The endeavor assumes that computers could manage billions of billions of cerebral connections. Alas, that’s not happening anytime soon.Nicolas P. Rougier, Chargé de Recherche, InriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/445112015-07-20T04:16:09Z2015-07-20T04:16:09ZMoore’s Law is 50 years old but will it continue?<figure><img src="https://images.theconversation.com/files/88941/original/image-20150720-21047-q7is1z.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The rate of growth in computing power predicted by Gordon Moore (pictured) could be slowing.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/jurvetson/16318918399/">Flickr/Steve Jurvetson</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>It’s been 50 years since <a href="http://www.britannica.com/biography/Gordon-E-Moore">Gordon Moore</a>, one of the founders of the microprocessor company Intel, gave us <a href="http://arstechnica.com/gadgets/2008/09/moore/">Moore’s Law</a>. This says that the complexity of computer chips ought to double roughly every two years. </p>
<p>Now the current CEO of Intel, Brian Krzanich, is saying the days of Moore’s Law <a href="http://www.ft.com/intl/cms/s/0/36b722bc-2b49-11e5-8613-e7aedbb7bdb7.html#axzz3gNiPHpum">may be coming to an end</a> as the time between new innovations appears to be widening:</p>
<blockquote>
<p>The last two technology transitions have signalled that our cadence today is closer to two and a half years than two.</p>
</blockquote>
<p>So is this the end of Moore’s Law?</p>
<p>Moore’s Law has its roots in an <a href="http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=658762">article</a> by <a href="http://www.intel.com/content/www/us/en/history/museum-gordon-moore-law.html">Moore</a> written in 1965, in which he observed the complexity of component development was doubling each year. This was later modified to become:</p>
<blockquote>
<p>The number of transistors incorporated in a chip will approximately double every 24 months.</p>
</blockquote>
<p>This rate was again modified to a doubling over roughly 18 months. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=923&fit=crop&dpr=1 600w, https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=923&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=923&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1160&fit=crop&dpr=1 754w, https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1160&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/88947/original/image-20150720-18556-1s2dol8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1160&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Gordon Moore.</span>
<span class="attribution"><a class="source" href="http://www.intel.com/pressroom/kits/events/moores_law_40th/">Intel</a></span>
</figcaption>
</figure>
<p>In its 24-month guise, Moore’s Law has continued unabated for 50 years, with an overall advance of a factor of roughly 2<sup>31</sup>, or 2 billion. That means memory chips today store around 2 billion times as much data as in 1965. Or, in more general terms, computer hardware today is around 2 billion times as powerful for the same cost.</p>
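<p>The 2 billion figure is just repeated doubling; a quick sketch (the exact factor depends on which doubling time you assume):</p>
<pre><code>
years = 50
doublings = 31                  # ~19-month doubling over 50 years
print(f"{2 ** doublings:.2e}")  # ~2.1e+09, i.e. roughly 2 billion

# At exactly 24 months per doubling the 50-year factor would be
# 2**25, about 34 million; the 2-billion figure corresponds to a
# doubling time between the 18- and 24-month formulations.
</code></pre>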
<p>It is hard to comprehend Moore’s Law. Imagine airline technology advancing from 1965 to 2015 to travel nearly at the speed of light (1,080 million kph or 670 million mph), yet capacious enough to contain the entire world’s population. Or imagine the cost of a jet airliner dropping from US$100 million to one dollar. Yet even these analogies fall far short of a factor of 2 billion.</p>
<p>Moore was originally embarrassed by his eponymous “law”. This is in part because it is not at all a law in the sense of a law of physics, but instead merely an observation. But on the <a href="http://www.computerhistory.org/events/video/127/">40th anniversary</a>, Intel was happy to celebrate it and Moore was pleased to note that it still seemed to be accurate.</p>
<h2>The end is nigh?</h2>
<p>A few months ago though, Moore <a href="http://www.nytimes.com/2015/05/13/opinion/thomas-friedman-moores-law-turns-50.html">observed</a>:</p>
<blockquote>
<p>The original prediction was to look at 10 years, which I thought was a stretch […] The fact that something similar is going on for 50 years is truly amazing. […] But someday it has to stop. No exponential like this goes on forever.</p>
</blockquote>
<p>There have been numerous other predictions that Moore’s Law was soon to end.</p>
<p>In 1999, physicist and best-selling author Michio Kaku <a href="http://articles.philly.com/1999-08-12/business/25485212_1_molecular-computers-optical-computers-transistors">declared</a> that the “Point One barrier” (meaning chip features 0.1 micron or 100 nanometers in size) would soon halt progress.</p>
<p>Yet the semiconductor industry sailed through the 0.1 micron level like a jetliner passing through a wispy cloud. Devices currently in production have feature sizes as small as 10 or 14 nanometers, and <a href="http://www.nytimes.com/2015/07/09/technology/ibm-announces-computer-chips-more-powerful-than-any-in-existence.html">IBM has just announced</a> a chip with 7 nanometer features.</p>
<p>By comparison, a helical strand of DNA is 2.5 nanometers in diameter, thus commercial semiconductor technology is now entering the molecular and atomic realm.</p>
<h2>A speed barrier</h2>
<p>Not all is roses, though. By one measure – a processor’s <a href="http://www.pcmag.com/encyclopedia/term/39831/clock-speed">clock speed</a> – Moore’s Law has already <a href="http://www.independent.co.uk/life-style/gadgets-and-tech/news/the-end-of-moores-law-why-the-theory-that-computer-processors-will-double-in-power-every-two-years-may-be-becoming-obsolete-10394659.html">stalled</a>.</p>
<p>Today’s state-of-the-art production microprocessors typically have 3 GHz clock rates, compared with 2 GHz rates five or ten years ago – not a big improvement.</p>
<p>But the industry has simply increased the number of processor “cores” and on-chip cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections. There are many, many software challenges in ensuring that real programs can exploit all of these cores.</p>
<p>Hewlett Packard Laboratories is hard at work developing new approaches for microelectronics. Its <a href="http://www.hpl.hp.com/research/about/nanotechnology.html#electronics">nanotechnology research group</a> has developed a “crossbar architecture”, a design where a set of parallel “wires” a few nanometers in width are crossed by a second set of “wires” at right angles. Where the “wires” intersect forms an electronic switch, which can be configured for either logic or memory storage use.</p>
<p>It is also investigating nanoscale photonics (light-based devices), which can be deployed either for conventional electronic devices or for emerging quantum computing devices.</p>
<h2>Moore’s Law is a gift to science</h2>
<p>Moore’s Law has been a great blessing to <a href="https://theconversation.com/make-mine-a-double-moores-law-and-the-future-of-mathematics-4957">science and mathematics research</a>. Modern laboratories are loaded with high-tech measurement and analysis devices, which become more powerful and cheaper every year.</p>
<p>In addition, a broad range of modern science, mathematics and engineering has benefited from Moore’s Law in the form of scientific supercomputers, which are used for applications ranging from supernova simulation and protein folding to product design and the processing of microwave background radiation from the cosmos.</p>
<p>Software running these computers has advanced abreast of Moore’s Law.</p>
<p>For example, the <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/math/fft.html">fast Fourier transform</a> algorithm, which is used extensively in scientific computation, and magnetic resonance imaging (MRI) both involve substantial computation that would not be possible without Moore’s Law advances.</p>
<p>It is not entirely coincidental that both of these algorithmic advances arose roughly 50 years ago, the same time Moore’s Law was first observed.</p>
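<p>To see why the algorithm matters as much as the hardware, compare the FFT’s O(n log n) cost with the naive O(n²) transform – a minimal NumPy sketch with an illustrative input size:</p>
<pre><code>
import numpy as np

n = 2 ** 20                     # about a million samples (illustrative)
signal = np.random.rand(n)

spectrum = np.fft.fft(signal)   # O(n log n): ~2e7 operations' worth
print(spectrum.shape)           # (1048576,)

# The naive discrete Fourier transform costs O(n**2) -- ~1e12
# operations here, roughly 50,000 times more work. The algorithm,
# not just faster hardware, made modern signal processing feasible.
</code></pre>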
<h2>How much more for Moore’s Law?</h2>
<p>Intel’s CEO, Brian Krzanich, <a href="http://www.businessinsider.com.au/intel-ceo-brian-krzanich-suggests-moores-law-is-over-2015-7">said the company</a> would “strive to get back to two years” for innovation to keep Moore’s Law on track.</p>
<p>If Moore’s Law does continue for just two or three more decades, typical handheld devices may well exceed the human brain in intelligence. Some, such as author <a href="http://www.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/0312622376">James Barrat</a>, declare that artificially intelligent computers will be the “final invention” of mankind, after which humans may become irrelevant.</p>
<p>We do not subscribe to such pessimism. Rather we see a promising future with scientific knowledge, among other things, increasing at an exponential rate.</p>
<p>Time will tell. As physicist Richard Feynman wrote in 1959, referring to the potential for ever finer control of nature at the microscopic level, there still appears to be <a href="https://en.wikipedia.org/wiki/There%27s_Plenty_of_Room_at_the_Bottom">plenty of room at the bottom</a>.</p>
<p class="fine-print"><em><span>Jonathan Borwein (Jon) receives funding from the Australian Research Council</span></em></p><p class="fine-print"><em><span>David H. Bailey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The rapid advancement of computing power has followed an unusual law that was first mooted a half century ago. But are there signs things could be slowing down?Jonathan Borwein (Jon), Laureate Professor of Mathematics, University of NewcastleDavid H. Bailey, PhD; Lawrence Berkeley Laboratory (retired) and Research Fellow, University of California, DavisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/49462012-02-02T19:22:28Z2012-02-02T19:22:28ZSuper models – using maths to mitigate natural disasters<figure><img src="https://images.theconversation.com/files/7247/original/3t429ysb-1327966354.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We can't tame the oceans, but modelling can help us better understand them.</span> <span class="attribution"><span class="source">Badruddeen</span></span></figcaption></figure><p>Last year will go on record as one of significant natural disasters both in Australia and overseas. Indeed, the <a href="http://www.abc.net.au/news/2011-01-11/raging-floods-bear-down-on-brisbane/1901406">flooding of the Brisbane River in January</a> is still making news as the <a href="http://www.floodcommission.qld.gov.au/">Queensland floods inquiry</a> investigates whether water released from <a href="http://en.wikipedia.org/wiki/Wivenhoe_Dam">Wivenhoe Dam</a> was responsible. Water modelling is being used to <a href="http://www.theaustralian.com.au/national-affairs/elections/huge-dam-releases-caused-most-of-city-flood-study-shows/story-fnbsqt8f-1226260029718">answer the question</a>: could modelling have avoided the problem in the first place?</p>
<p>This natural disaster – as well as the <a href="https://theconversation.com/learning-from-the-japan-tsunami-340">Japanese tsunami in March</a> and the <a href="http://www.theaustralian.com.au/news/world/public-holiday-declared-as-bangkok-braces-for-new-floodwaters/story-e6frg6so-1226176876594">flooding in Bangkok in October</a> – involved the movement of fluids: water, mud or both. And all had a human cost – displaced persons, the spread of disease, disrupted transport, disrupted businesses, broken infrastructure and damaged or destroyed homes. With the planet now housing <a href="https://theconversation.com/topics/7-billion-people">7 billion people</a>, the potential for adverse humanitarian effects from natural disasters is greater than ever.</p>
<p>Here in CSIRO’s division of Mathematical and Information Sciences, we’ve been working with various government agencies (in Australia and China) to model the flow of flood waters and the debris they carry. Governments are starting to realise just how powerful computational modelling is for understanding and analysing natural disasters and how to plan for them.</p>
<p>This power is based on two things – the power of computers and the power of the <a href="http://mathworld.wolfram.com/Algorithm.html">algorithms</a> (computer processing steps) that run on the computers.</p>
<figure>
<iframe src="https://player.vimeo.com/video/35610400" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</figure>
<p>In recent years, the huge <a href="https://theconversation.com/make-mine-a-double-moores-law-and-the-future-of-mathematics-4957">increase in computer power and speed</a> coupled with advances in algorithm development has allowed mathematical modellers like us to make large strides in our research.</p>
<p>These advances have enabled us to model millions, even billions of water particles, allowing us to more accurately predict the effects of natural and man-made fluid flows, such as tsunamis, dam breaks, floods, mudslides, coastal inundation and storm surges. </p>
<p>So how does it work?</p>
<p>Well, fluids such as sea water can be represented as billions of particles moving around, filling spaces, flowing downwards, interacting with objects and in turn being acted upon. Or the fluid can be visualised as a mesh that tracks its shape.</p>
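<p>As a toy sketch of the particle view – gravity only, with none of the density, pressure or viscosity forces a real SPH code computes from neighbouring particles – consider:</p>
<pre><code>
import numpy as np

n, dt = 1000, 0.01                    # illustrative count and time step
pos = np.random.rand(n, 3)            # particle positions in a unit box
vel = np.zeros((n, 3))                # particles start at rest
gravity = np.array([0.0, 0.0, -9.81])

for step in range(100):
    vel += gravity * dt               # accelerate every particle
    pos += vel * dt                   # move it
    hit = pos[:, 2] < 0.0             # crude floor at z = 0
    pos[hit, 2] = 0.0
    vel[hit, 2] = 0.0
</code></pre>
<p>Real fluid solvers replace the crude floor with forces summed over each particle’s neighbours, which is what makes simulating billions of particles so computationally demanding.</p>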
<p>Let’s consider a tsunami such as the one that struck the Japanese coast in March of last year. When a tsunami first emerges as a result of an earthquake, <a href="http://vimeo.com/35919071">shallow water modelling techniques</a> give us the most accurate view of the wave’s formation and early movement.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=641&fit=crop&dpr=1 600w, https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=641&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=641&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=805&fit=crop&dpr=1 754w, https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=805&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/7242/original/954hvrqn-1327963565.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=805&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Mesh modelling of water being poured into a glass.</span>
<span class="attribution"><span class="source">Mahesh Prakash</span></span>
</figcaption>
</figure>
<p>Once the wave is closer to the coast however, techniques known collectively as <a href="https://theconversation.com/superman-returns-but-whos-looking-after-his-water-680">smoothed particle hydrodynamics (SPH)</a> are better at predicting how the wave interacts with local geography. We’ve created <a href="http://vimeo.com/35917884">models of a hypothetical tsunami off the northern Californian coastline</a> to test this.</p>
<p>A dam break can also be modelled using SPH. The modelling shows how fast the water moves at certain times and in certain places, where water “overtops” hills and how quickly it reaches towns or infrastructure such as power stations.</p>
<p>This can help town planners to build mitigating structures and emergency services to co-ordinate an efficient response. Our models have been validated using historical data from a real dam that broke in California in 1928 – the <a href="http://www.semp.us/publications/biot_reader.php?BiotID=376">St. Francis Dam</a>.</p>
<p>Having established that <a href="http://www.youtube.com/watch?v=QSm1nDS6J2k">our modelling techniques</a> work better than others, we can apply them to a range of what-if situations.</p>
<p>In collaboration with the <a href="http://www.sasmac.cn/portal_space/articleListView.view?moduleId=2c9090c02aa6d647012aa6f7b5330022&moduleType=2&siteId=fc4f335929b0df0d0129b0e348f90003&isImage=1">Satellite Surveying and Mapping Application Centre in China</a> we tested scenarios such as the <a href="http://www.csiro.au/Portals/Media/CSIRO-dam-break-modelling-to-help-flood-planning.aspx">hypothetical collapse of the massive Geheyan Dam</a> in China.</p>
<figure>
<iframe src="https://player.vimeo.com/video/35610237" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</figure>
<p>We combined our modelling techniques with digital terrain models to get a realistic picture of how such a disaster would unfold and, therefore, what actions could mitigate it.</p>
<p>Our experience in developing and using these techniques over several decades allows us to combine them in unique ways for each situation.</p>
<p>We’ve modelled fluids not just for natural disaster planning but also movie special effects, hot metal production, water sports and even something as everyday as insurance.</p>
<p>Insurance companies have been looking to us for help to understand how natural disasters unfold. They cop a lot of media flak after disasters for not covering people affected. People living in low-lying areas have traditionally had difficulty accessing flood insurance and find themselves unprotected in flood situations.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/7246/original/wb8zhg5y-1327965650.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Caveman Chuck Coker</span></span>
</figcaption>
</figure>
<p>Insurers are starting to realise that the modelling of geophysical flows can provide a basis for predicting localised risk of damage due to flooding and make flood coverage a viable business proposition. One Australian insurance company has been working with us to quantify risk of inundation in particular areas.</p>
<p>Using data from the <a href="http://www.bom.gov.au/hydro/flood/qld/fld_reports/brisbane_jan1974.pdf">1974 Brisbane floods</a>, the floods of last year and fluid modelling data, an insurance company can reliably assess residents’ exposure to particular risks and thereby determine suitable premiums.</p>
<p>With evidence-based tools such as fluid modelling in their arsenal, decision-makers are better prepared for the future. That may be a future of more frequent natural disasters, a future with a more-densely-populated planet, or, more likely, a combination of both.</p>
<p><em>This article was co-authored by <a href="https://theconversation.com/profiles/paul-cleary-6681">Dr Paul Cleary</a>, leader of CSIRO’s computational modelling team.</em></p>
<p><strong>Further viewing:</strong></p>
<ul>
<li><a href="http://vimeo.com/35919071">Shallow water model of tsunami hitting Rottnest Island and Fremantle Harbour</a> – CSIRO</li>
<li><a href="http://vimeo.com/35917884">SPH model of tsunami hitting the California coastline</a> – CSIRO</li>
<li><a href="http://www.youtube.com/watch?v=QSm1nDS6J2k">CSIRO models catastrophic flooding</a> – CSIRO/YouTube</li>
</ul>
<p class="fine-print"><em><span>Mahesh Prakash receives funding from AusAID.</span></em></p>Last year will go on record as one of significant natural disasters both in Australia and overseas. Indeed, the flooding of the Brisbane River in January is still making news as the Queensland floods inquiry…Mahesh Prakash, Principal Research Scientist, Fluid Dynamics, CSIROLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/49572012-01-30T19:33:59Z2012-01-30T19:33:59ZMake mine a double: Moore’s Law and the future of mathematics<figure><img src="https://images.theconversation.com/files/7196/original/yk8jfdzf-1327640125.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Our present achievements will look like child's play in a few years.</span> <span class="attribution"><span class="source"> Rinoninha</span></span></figcaption></figure><p>What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law.</p>
<p>Moore’s Law stems from a <a href="ftp://download.intel.com/museum/Moores_Law/Articles-press_Releases/Gordon_Moore_1965_Article.pdf">seminal 1965 article</a> by Intel founder <a href="http://inventors.about.com/od/mstartinventors/p/Gordon-Moore.htm">Gordon Moore</a>. He wrote:</p>
<blockquote>
<p>The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years. That means, by 1975, the number of components per integrated circuit for minimum cost will be 65,000.</p>
</blockquote>
<p>Moore noted that in 1965 engineering advances were enabling a doubling in semiconductor density every 12 months, but this rate was later modified to roughly 18 months. Informally, we may think of this as doubling computer performance.</p>
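<p>Moore’s 65,000 figure for 1975 is a straight extrapolation of yearly doubling; a sketch, assuming a starting count of 64 components in 1965 (a round figure consistent with the quote, not one stated in it):</p>
<pre><code>
components = 64                  # assumed component count in 1965
for year in range(1966, 1976):   # doubling every 12 months
    components *= 2
print(components)                # 65536 -- Moore's "65,000" by 1975
</code></pre>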
<p>In any event, Moore’s Law has now continued unabated for 45 years, defying several confident predictions it would soon come to a halt, and represents a sustained exponential rate of progress that is without peer in the history of human technology. Here is a graph of Moore’s Law, shown with the transistor count of various computer processors:</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=539&fit=crop&dpr=1 600w, https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=539&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=539&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=678&fit=crop&dpr=1 754w, https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=678&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/6993/original/n5njb9rj-1326775338.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=678&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Moore’s Law.</span>
<span class="attribution"><span class="source">Wikimedia</span></span>
</figcaption>
</figure>
<h2>Where we’re at with Moore’s Law</h2>
<p>At the present time, researchers are struggling to keep Moore’s Law on track. Processor clock rates have stalled, as chip designers have struggled to control energy costs and heat dissipation, but the industry’s response has been straightforward — simply increase the number of processor “cores” on a single chip, together with associated cache memory, so that aggregate performance continues to track or exceed Moore’s Law projections.</p>
<p>The capacity of leading-edge <a href="http://en.wikipedia.org/wiki/Dynamic_random-access_memory#Synchronous_dynamic_RAM_.28SDRAM.29">DRAM main memory chips</a> continues to advance apace with Moore’s Law. The current state of the art in computer memory devices is a 3D design, which will be jointly produced by IBM and Micron Technology, according to a <a href="http://www-03.ibm.com/press/us/en/pressrelease/36125.wss">December 2011 announcement</a> by IBM representatives.</p>
<p>As things stand, the best bet for the future of Moore’s Law are <a href="https://theconversation.com/dont-believe-the-hype-carbon-nanotubes-are-merely-extraordinary-321">nanotubes</a> — submicroscopic tubes of carbon atoms that have remarkable properties. </p>
<p>According to a recent <a href="http://www.nytimes.com/2011/12/06/science/silicons-possible-successors-include-carbon-nanotubes.html?pagewanted=all">New York Times article</a>, Stanford researchers have created prototype electronic devices by first growing billions of carbon nanotubes on a quartz surface, then coating them with an extremely fine layer of gold atoms. They then used a piece of tape (literally!) to pick the gold atoms up and transfer them to a silicon wafer. The researchers believe that commercial devices could be made with these components as early as 2017.</p>
<h2>Moore’s Law in science and maths</h2>
<p>So what does this mean for researchers in science and mathematics?</p>
<p>Plenty, as it turns out. A scientific laboratory typically uses hundreds of high-precision devices that rely crucially on electronic designs, and with each step of Moore’s Law, these devices become ever cheaper and more powerful. One prominent case is <a href="http://www.wiley.com/college/pratt/0471393878/student/animations/dna_sequencing/index.html">DNA sequencers</a>. When scientists first completed sequencing a human genome in 2001, at a cost of several hundred million US dollars, observers were jubilant at the advances in equipment that had made this possible. </p>
<p>Now, only ten years later, researchers expect to reduce this cost to only <a href="http://www.nytimes.com/2011/12/06/science/silicons-possible-successors-include-carbon-nanotubes.html?pagewanted=all">US$1,000 within two years</a> and genome sequencing may well become a standard part of medical practice. This astounding improvement is even faster than Moore’s Law!</p>
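<p>How much faster than Moore’s Law? A sketch of the implied halving time, using round versions of the figures quoted above:</p>
<pre><code>
import math

cost_2001 = 300e6   # "several hundred million US dollars" (round figure)
cost_2013 = 1e3     # the anticipated US$1,000 genome
years = 12          # 2001 to roughly 2013

halvings = math.log2(cost_2001 / cost_2013)      # ~18.2 halvings
print(f"cost halves every {years / halvings:.2f} years")
# ~0.66 years, versus the 1.5-2 years of Moore's Law
</code></pre>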
<p>Applied mathematicians have benefited from Moore’s Law in the form of scientific supercomputers, which typically employ hundreds of thousands of state-of-the-art components. These systems are used for tasks such as climate modelling, product design and biological structure calculations. </p>
<p>Today, the world’s most powerful system is a Japanese supercomputer that recently ran the industry-standard <a href="http://www.top500.org/project/linpack">Linpack benchmark test</a> at more than ten “<a href="http://www.techterms.com/definition/petaflops">petaflops</a>,” or, in other words, 10 quadrillion <a href="http://www.techterms.com/definition/fpu">floating-point</a> operations per second.</p>
<p>Below is a graph of the Linpack performance of the world’s leading-edge systems over the time period 1993-2011, courtesy of the website Top 500. Note that over this 18-year period, the performance of the world’s number one system has advanced more than five orders of magnitude. The current number one system is more powerful than the sum of the world’s top 500 supercomputers just four years ago.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=381&fit=crop&dpr=1 600w, https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=381&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=381&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=478&fit=crop&dpr=1 754w, https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=478&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/6994/original/9kgnzs6q-1326775340.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=478&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Linpack performance over time.</span>
<span class="attribution"><span class="source">www.top500.org</span></span>
</figcaption>
</figure>
<p>Pure mathematicians have been a relative latecomer to the world of high-performance computing. The present authors well remember the era, just a decade or two ago, when the prevailing opinion in the community was that “real mathematicians don’t compute.” </p>
<p>But thanks to a new generation of mathematical software tools, not to mention the ingenuity of thousands of young, computer-savvy mathematicians worldwide, remarkable progress has been achieved in this arena as well (see our <a href="http://www.ams.org/notices/201110/rtx111001410p.pdf">2011 AMS Notices article</a> on exploratory experimentation in mathematics).</p>
<p>In 1963 <a href="http://en.wikipedia.org/wiki/Daniel_Shanks">Daniel Shanks</a>, who had <a href="https://theconversation.com/are-pis-days-numbered-39">calculated pi</a> to 100,000 digits, declared that computing one billion digits would be “forever impossible.” Yet this level was reached in 1989. That same year, the famous British physicist <a href="http://www.gap-system.org/%7Ehistory/Biographies/Penrose.html">Roger Penrose</a>, in the first edition of his best-selling book <a href="http://www.maths.adelaide.edu.au/media/talks/The_Emperors_New_Mind.pdf">The Emperor’s New Mind</a>, declared that humankind would likely never know whether a string of ten consecutive sevens occurs in the decimal expansion of pi. Yet this was found just eight years later, in 1997.</p>
<p>Computers are certainly being used for more than just <a href="http://carma.newcastle.edu.au/jon/normality.pdf">computing and analysing digits of pi</a>. In 2003, the American mathematician <a href="https://sites.google.com/site/thalespitt/bio">Thomas Hales</a> completed a computer-based proof of <a href="http://mathworld.wolfram.com/KeplerConjecture.html">Kepler’s conjecture</a>, namely the long-hypothesised fact that the simple way the grocer stacks oranges is in fact the optimal packing for equal-diameter spheres. Many other examples could be cited.</p>
<h2>Future prospects</h2>
<p>So what does the future hold? Assuming that Moore’s Law continues unabated at approximately the same rate as the present, and that obstacles in areas such as power management and system software can be overcome, we will see, by the year 2021, large-scale supercomputers that are 1,000 times more powerful and capacious than today’s state-of-the-art systems — “exaflops” computers (see <a href="http://www.nitrd.gov/subcommittee/hec/materials/The%20Future%20of%20Computing%20Performance-NAS%20study%20final.pdf">NAS Report</a>). Applied mathematicians eagerly await these systems for calculations, such as advanced climate models, that cannot be done on today’s systems.</p>
<p>Pure mathematicians will use these systems as well to intuit patterns, compute <a href="http://mathworld.wolfram.com/Integral.html">integrals</a>, search the space of mathematical identities, and solve intricate symbolic equations. If, as one of us discussed in a <a href="https://theconversation.com/if-i-had-a-blank-cheque-id-turn-ibms-watson-into-a-maths-genius-1213">recent Conversation article</a>, such facilities can be combined with machine intelligence, such as a variation of the hardware and software that enabled an IBM system to <a href="http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html">defeat the top human contestants</a> in the North American TV game show Jeopardy!, we may see a qualitative advance in mathematical discovery and even theory formation. </p>
<p>It is not a big leap to imagine that within the next ten years tailored and massively more powerful versions of <a href="https://theconversation.com/something-about-siri-has-the-iphone-virtual-assistant-become-the-apple-of-our-eye-4817">Siri</a> (Apple’s new iPhone assistant) will be an integral part of mathematics, not to mention medicine, law and just about every other part of human life.</p>
<p>Some observers, such as those in the <a href="http://en.wikipedia.org/wiki/Technological_singularity">Singularity movement</a>, are even more expansive, predicting a time just a few decades hence when technology will advance so fast that at the present time we cannot possibly conceive or predict the outcome. </p>
<p>The present authors do not subscribe to such optimistic projections, but even if more conservative predictions are realised, it is clear that the digital future looks very bright indeed. We will likely look back at the present day with the same technological disdain with which we currently view the 1960s.</p>
<p><em>A version of this article first appeared on <a href="http://experimentalmath.info/blog/2012/01/moores-law-and-the-future-of-science-and-mathematics/">Math Drudge</a>.</em></p>
<p class="fine-print"><em><span>Jon Borwein receives funding from ARC</span></em></p><p class="fine-print"><em><span>David H. Bailey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>What do iPhones, Twitter, Netflix, cleaner cities, safer cars, state-of-the-art environmental management and modern medical diagnostics have in common? They are all made possible by Moore’s Law. Moore’s…Jonathan Borwein (Jon), Laureate Professor of Mathematics, University of NewcastleDavid H. Bailey, PhD; Senior Scientist, Lawrence Berkeley Laboratory (retired) and Research Fellow, University of California, DavisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/11472011-09-21T20:36:14Z2011-09-21T20:36:14ZPerformance anxiety: the end of software’s free ride<figure><img src="https://images.theconversation.com/files/3773/original/aapone-20110201000295842913-topshots-japan-entertainment-magic-technology-ipad-original.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The factors fuelling advances in computer hardware are drying up.</span> <span class="attribution"><span class="source">Yoshikazu Tsuno/AFP</span></span></figcaption></figure><p>We are consumers of software that is ever more capable, diverse and clever. </p>
<p>Our Google queries, our Facebook experience, our ability to play HD movies on our iPads, and the convenience of reading emails on our phones, all depend on computing power that we don’t see and don’t usually give a second thought to.</p>
<p>This progress is quietly driven by improvements in computer hardware. Many of us find it unremarkable that a $600 iPad can <a href="http://bits.blogs.nytimes.com/2011/05/09/the-ipad-in-your-hand-as-fast-as-a-supercomputer-of-yore/">outperform</a> the <a href="http://en.wikipedia.org/wiki/Cray-2">Cray 2</a>, which not so long ago was the fastest computer on Earth.</p>
<p>Unfortunately the source of these endless performance improvements is drying up, and the free ride so long enjoyed by software developers is in jeopardy. Worse, this is occurring at a time when <a href="http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html?mod=WSJ_article_comments#articleTabs%3Darticle">software has become more important than ever</a>.</p>
<p>For decades, hardware advances were fuelled by two givens: <a href="http://download.intel.com/museum/Moores_Law/Printed_Materials/Moores_Law_2pg.pdf">Moore’s Law</a> and <a href="http://dx.doi.org/10.1109/N-SSC.2007.4785534">Dennard Scaling</a>. </p>
<p>Moore’s Law says that the number of transistors on a chip <a href="http://theconversation.com/double-or-nothing-could-quantum-computing-replace-moores-law-362">roughly doubles every two years</a> as advances in device physics yield smaller transistors. </p>
<p>Dennard Scaling is less well known but no less significant – it states that as a transistor shrinks, both <a href="http://www.physics.unlv.edu/%7Ebill/PHYS483/transbas.pdf">switching time</a> (the time needed for a transistor to go from a non-conducting state to a conducting state) and power consumption will fall proportionately.</p>
<p>Together these tell us that we should expect transistors to get smaller, faster and more power efficient with every technology generation. </p>
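<p>To make the combined effect concrete, here is a minimal Python sketch of the classic Dennard scaling rules, assuming the textbook shrink factor of roughly 0.7 per generation. The numbers are purely illustrative, not measurements of any real process.</p>
<pre><code>
# A minimal sketch of classic Dennard scaling, assuming the textbook
# shrink factor k = 1/sqrt(2) per generation (dimensions shrink ~30%).
# Under Dennard's rules: delay scales with k, power per transistor with
# k**2, and density with 1/k**2 -- so power density stays constant.

k = 1 / (2 ** 0.5)  # assumed linear shrink per generation

delay, power_per_transistor, density = 1.0, 1.0, 1.0
for gen in range(1, 6):
    delay *= k                      # transistors switch faster
    power_per_transistor *= k ** 2  # each transistor draws less power
    density /= k ** 2               # more transistors fit per unit area
    power_density = power_per_transistor * density  # stays ~1.0
    print(f"gen {gen}: delay={delay:.2f}, "
          f"power/transistor={power_per_transistor:.2f}, "
          f"density={density:.1f}, power density={power_density:.1f}")
</code></pre>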
<p>For many years this was true – hardware simply got faster, delivering performance improvements that led to all of our software running faster, as if by magic. </p>
<h2>Multicore</h2>
<p>Unfortunately, physics got in the way in the end. Wire delay (the time it takes a signal to propagate along a length of microscopic wire) <a href="http://dx.doi.org/10.1145/339647.339691">became a limiting factor</a>.</p>
<p>While for many years a signal could traverse the entire chip at each tick of the computer’s clock, today only a tiny fraction of the chip is accessible in the time it takes the clock to tick. This is because today’s clocks run faster and today’s on-chip wires are so small that signals propagate more slowly.</p>
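<p>A back-of-envelope calculation suggests why. The figures below are assumptions for illustration (a 3 GHz clock and an on-chip signal travelling at a tenth of the speed of light), not measurements:</p>
<pre><code>
# How far can a signal travel in one tick of a 3 GHz clock?
# Assumed (hypothetical) figure: on-chip wires carry signals at roughly
# 10% of the speed of light; real RC-limited wires can be slower still.

clock_hz = 3e9                   # one tick lasts a third of a nanosecond
speed_of_light = 3e8             # metres per second
signal_speed = 0.1 * speed_of_light

reach_mm = signal_speed / clock_hz * 1000
print(f"reach per tick: about {reach_mm:.0f} mm")  # ~10 mm, about one die width
</code></pre>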
<p>To combat this problem, hardware manufacturers turned to <a href="http://www.techspot.com/news/19115-multicore-architectures-explained.html">multicore</a> designs (a single computing component with two or more independent processors, or “cores”). </p>
<p>Rather than using the surfeit of transistors to make the chip’s processor ever larger and more capable, they put multiple cores on each chip. </p>
<p>But just as two cars are unlikely to get you to work faster than one, the addition of another core is often unhelpful in completing a computing problem more quickly.</p>
<p>This observation was made famous by <a href="http://www.actscorp.com/acts/amdahl.htm">Gene Amdahl</a> back in 1967, when he coined what we now know as <a href="http://home.wlu.edu/%7Ewhaleyt/classes/parallel/topics/amdahl.html">Amdahl’s Law</a>: the speed-up gained by using multiple processors is limited by the time needed to complete the portion of the program that cannot be made parallel (i.e. spread across multiple cores).</p>
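<p>Amdahl’s Law is easy to state in code. In this minimal Python sketch, the 95% parallel fraction is purely an illustrative assumption:</p>
<pre><code>
# Amdahl's Law, as stated above: with a fraction p of the work
# parallelisable and n cores, speedup = 1 / ((1 - p) + p / n).

def amdahl_speedup(p, n):
    """Upper bound on speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of a program parallelisable, the serial 5% caps the gain:
for n in (2, 4, 16, 256):
    print(n, round(amdahl_speedup(0.95, n), 1))
# prints 1.9, 3.5, 9.1 and 18.6 -- nowhere near a 256-fold speedup
</code></pre>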
<p>What this means is that today’s software developers have a major challenge on their hands. </p>
<p>Hardware advances, which were once delivered as transparent performance improvements (a faster car) now increasingly come in the form of hardware parallelism (two cars). The former meant existing programs ran faster as if by magic. The latter is only helpful for particular classes of problem (moving a football team, perhaps).</p>
<p>When this situation became clear in 2007, Stanford University President <a href="http://www.stanford.edu/dept/president/biography/">John Hennessy</a> <a href="http://queue.acm.org/detail.cfm?id=1189286">said</a>: </p>
<p>“When we start talking about parallelism and ease of use of truly parallel computers, we’re talking about a problem that’s as hard as any that computer science has faced … I would be panicked if I were in industry.”</p>
<p>Unfortunately things are set to get worse. Multicore hardware is just the first of three seismic changes that herald the end to software’s free ride.</p>
<h2>Heterogeneity</h2>
<p>Today’s multicore designs comprise a relatively straightforward combination of orthodox processor cores. </p>
<p>But acknowledging Amdahl’s Law, <a href="http://www.eetimes.com/electronics-news/4076123/CPU-designers-debate-multi-core-future">many designers</a> now believe that we need a more complex <a href="http://www.cs.wisc.edu/multifacet/amdahl/">combination of simple and powerful cores</a> on each chip.</p>
<p>The portions of a task that do exhibit parallelism can be efficiently solved by many simple cores. </p>
<p>How so? Well, consider the problem of moving a large number of commuters across Manhattan: thousands of unsophisticated yellow taxis would be perfect for the job.</p>
<p>But those portions of the task that lack parallelism still require a large, capable core in order to be solved quickly. </p>
<p>Likewise, consider the problem of getting a person to the moon: one very sophisticated Saturn V rocket would be appropriate to the task.</p>
<p>A heterogeneous <a href="http://www.inetdaemon.com/tutorials/computers/hardware/cpu/">central processing unit</a> (CPU) – often referred to as “the brain” of a computer – may offer both the taxis and the rocket, side by side.</p>
<p>Unfortunately, heterogeneity takes us even further from the world of transparent performance improvements. </p>
<p>This second major change in computer hardware means that software must now not only exhibit parallelism, but must also be capable of somehow effectively utilising complex, non-uniform hardware resources.</p>
<h2>Customisation and energy</h2>
<p>But it’s another major change that’s set to be most disruptive to computer science. Although Moore’s Law continues to deliver us transistors, Dennard Scaling is coming to an end. </p>
<p>In practice, power densities on chip have become so high that we can no longer fully power an entire chip lest we melt the silicon. This radically changes the economics of microarchitecture.</p>
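<p>A deliberately simplified illustration of that arithmetic: suppose, hypothetically, that transistor counts keep doubling while both the chip’s power budget and the power drawn per transistor stay fixed. The fraction of the chip that can be switched on then halves every generation:</p>
<pre><code>
# "Dark silicon" arithmetic under the stated (hypothetical) assumptions:
# transistor count doubles per generation; total power budget is fixed;
# per-transistor power no longer falls (Dennard Scaling has ended).

transistors = 1.0   # normalised count at generation 0
budget = 1.0        # normalised chip power budget

for gen in range(5):
    powered_fraction = min(1.0, budget / transistors)
    print(f"gen {gen}: fraction of the chip we can power = {powered_fraction:.0%}")
    transistors *= 2  # Moore's Law continues regardless
# gen 0: 100%, gen 1: 50%, gen 2: 25% ... the rest stays "dark"
</code></pre>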
<p>For the past 40 years, a relative scarcity of transistors led to a mantra of generality: customisation is an unjustifiable luxury when transistors are scarce but energy is plentiful, so each design had to be as general as possible.</p>
<p>That mantra of generality, ingrained in the minds of generations of designers, needs a radical rethink as we become energy constrained. </p>
<p>This means that, as energy becomes the dominant concern, we must turn to <a href="http://dx.doi.org/10.1145/1941487.1941507">custom chip designs</a>.</p>
<p>This flies in the face of orthodox hardware design and has the potential to enormously complicate the task for software designers who must efficiently harness a large, complex, non-uniform set of computing resources.</p>
<p>If this were not enough, programmers, trained for decades to obsess over performance, now have an entirely new focus: energy. </p>
<p>To complicate matters further, not only are programmers untrained in optimising for energy, there are also <a href="http://dx.doi.org/10.1145/1961296.1950402">few tools</a> to help them do so.</p>
<p>Thus software developers suddenly find themselves having to: </p>
<ol>
<li>adapt to parallel hardware</li>
<li>adapt to heterogeneous hardware</li>
<li>understand and optimise for energy rather than performance.</li>
</ol>
<p>These are enormous challenges, and it will be fascinating to see how the software industry adapts.</p>
<h2>Where to next?</h2>
<p>The human capacity for innovation is breathtaking. A case in point is Intel’s <a href="http://newsroom.intel.com/docs/DOC-2032">announcement</a> earlier this year that its <a href="http://www.anandtech.com/show/4313/intel-announces-first-22nm-3d-trigate-transistors-shipping-in-2h-2011">3D tri-gate transistor</a> is ready for commercial use after about ten years of development.</p>
<p>At a time when we thought there was little room to move in transistor design, a deceptively simple idea has changed the way we build the most fundamental element of computing technology. </p>
<p>This promises great improvements to performance and power consumption – which is particularly important for mobile devices.</p>
<p>The computing industry is extremely competitive, so Intel’s competitors will be hard at work developing competing technologies.</p>
<p>We find ourselves at a point of enormous change. The foundations of the computing landscape are radically shifting at a time when our appetite for software is growing faster than ever. </p>
<p>It’s hard to imagine where these trajectories will take us, but for computer science researchers the challenges are both imposing and exciting.</p>
<p class="fine-print"><em><span>Steve Blackburn receives funding from Intel, IBM, Google, and the Australian Research Council.</span></em></p>We are consumers of software that is ever more capable, diverse and clever. Our Google queries, our Facebook experience, our ability to play HD movies on our iPads, and the convenience of reading emails…Steve Blackburn, Associate Professor of Computer Science, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/3622011-06-02T05:50:33Z2011-06-02T05:50:33ZDouble or nothing: could quantum computing replace Moore’s Law?<figure><img src="https://images.theconversation.com/files/1494/original/3398869062_7f5883d7f4_b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Have we reached the limit of how small computer hardware can be?</span> <span class="attribution"><span class="source">Mullenkedheim/Flickr</span></span></figcaption></figure><p>Ever noticed that computers become outdated remarkably quickly? It’s nice to have increasingly powerful computers available, and very profitable for the computer industry to have a new model available every couple of years. So how do they manage to make it happen?</p>
<p>Every new generation of computer hardware has roughly twice the processing power of the version two years before it. It’s a phenomenon known as <a href="http://www.intel.com/technology/mooreslaw/">Moore’s Law</a> and it’s held true for nearly 50 years.</p>
<p>But could Moore’s Law be coming to an end? Could we be reaching the limit of how fast computer processors can actually be? And if so, what then?</p>
<h2>End of an era</h2>
<p>Moore’s Law states that the number of transistors that fit on a certain area on a computer chip doubles every two years.</p>
<p>In the past few years, it’s become clear that we’re reaching the limit of just how small, and just how powerful, we can make processors. As a result, developers are now looking towards radical design changes, using exotic materials, and applying plenty of creative thinking in the quest for solutions.</p>
<p>One of the fields attracting a lot of attention is the study of quantum behaviour of electrons and how this applies to computing.</p>
<h2>A quantum future</h2>
<p>Existing (or “classical”) computer hardware works by storing data in a binary format within transistors. The smallest piece of information – a “bit” – can have one of two states: “off” or “on”, “0” or “1”.</p>
<p>Quantum computing, on the other hand, allows us to use many physical systems (such as electrons, photons, or tiny magnets) as quantum bits, or “qubits”.</p>
<p>These qubits can be engineered to contain the same binary information as classical bits – i.e. “0” or “1” – but that’s not all. Unlike any existing computer, one made of qubits can also encode an exponentially larger amount of information than a simple binary state.</p>
<p>Let’s put this into perspective.</p>
<p>Fourteen bits in your computer’s central processing unit (CPU) can contain, well, 14 bits of binary information – 14 pieces of information which are either “0” or “1”. </p>
<p>By contrast, <a href="http://www.scientificamerican.com/blog/post.cfm?id=physicists-entangle-a-record-breaki-2011-04-05">14 qubits in a quantum computer</a> can contain the equivalent of 2<sup>14</sup> bits of information. That’s 16,384 bits, far more than the 14 pieces of binary information possible in a classical system.</p>
<p>Let’s take it one step further and use 300 qubits as an example. Three hundred qubits are the equivalent of 2<sup>300</sup> classical bits – a number that exceeds even the estimated number of particles in the observable universe (roughly 10<sup>80</sup>).</p>
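<p>Both figures are easy to verify; this quick Python check uses the commonly quoted estimate of about 10<sup>80</sup> particles in the observable universe:</p>
<pre><code>
# Sanity-checking the numbers quoted above.
print(2 ** 14)             # 16384 -- the states spanned by 14 qubits
print(len(str(2 ** 300)))  # 91 -- so 2**300 is about 2 x 10**90,
                           # dwarfing the ~10**80 particles in the universe
</code></pre>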
<h2>Tangled up in blue</h2>
<p>So how can quantum bits store so much more information than classical bits? Well, it’s all down to a phenomenon known as <a href="http://theconversation.com/teleporting-schrodingers-cats-not-easy-but-its-all-right-miaow-919">quantum entanglement</a>.</p>
<p>A quantum particle is said to be “entangled” with another when its properties are only defined in relation to the other. Two entangled quantum particles could be physically separated, but if you observe them individually you will find correlations between them that cannot be accounted for by assuming they act independently of each other. </p>
<p>It may appear as if acting on one particle influences the other one instantly, even faster than the speed of light.</p>
<p>In reality, the entanglement makes the particles acquire “non-local” properties. No “action at a distance” is required, and the principles of <a href="http://www.phys.unsw.edu.au/einsteinlight/">relativity</a> (i.e. no information can be transported faster than the speed of light) are respected.</p>
<p>Odd as this may sound, entangled particles create a distinguishable and legitimate state that can be used as a code to carry additional information without using additional bits. </p>
<p>The <a href="http://www.nature.com/nature/journal/v404/n6775/full/404247a0.html">availability of these entangled states</a> is the reason quantum bits can encode exponentially more information that classical ones.</p>
<h2>There’s always a “but”…</h2>
<p>While qubits can store an exponentially-greater amount of information than classical bits, quantum computing is still in its infancy.</p>
<p>In fact, at the moment, there are only a few examples where quantum computers can be used to complete tasks more effectively than classical hardware. These include:</p>
<ul>
<li>The ability to decipher <a href="http://theconversation.com/cracking-bin-ladens-computer-code-unlikely-1096">encrypted information</a> much faster than is currently possible</li>
<li>The ability to search an unsorted database quickly and effectively (see the sketch after this list).</li>
</ul>
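<p>For the database-search case, the gap in query counts can be sketched in a few lines of Python. This is a hedged illustration of the ~√N scaling of Grover’s algorithm, ignoring constant factors:</p>
<pre><code>
# Classical search of an unsorted database needs ~N lookups;
# Grover's quantum algorithm needs only ~sqrt(N) oracle queries.
import math

for n_items in (10 ** 6, 10 ** 9, 10 ** 12):
    classical = n_items            # worst case: examine every item
    quantum = math.isqrt(n_items)  # Grover: ~sqrt(N) queries
    print(f"N={n_items:.0e}: classical ~{classical:.0e}, quantum ~{quantum:.0e}")
</code></pre>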
<p>The most advanced calculation done with quantum bits so far is the factoring of <a href="http://www.scottaaronson.com/blog/?p=208">15 = 3 × 5</a>. </p>
<p>This may seem unimpressive, but it proves that quantum computing can be used in this capacity. With more research and more time, we’ll be able to factorise extremely large numbers – ones that are thousands of digits long – in a matter of minutes, rather than the millions of years it would take now.</p>
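<p>A back-of-envelope comparison shows the scale of the promised gap. This sketch ignores constant factors and the far better classical sieve algorithms; the (log<sub>2</sub> N)<sup>3</sup> figure is only the commonly quoted rough cost of Shor’s quantum factoring algorithm:</p>
<pre><code>
# Rough step counts for factoring a 1,000-digit number N (illustrative).
import math

digits = 1000
log2_N = digits * math.log2(10)  # bits in a 1,000-digit number
shor_steps = log2_N ** 3         # roughly (log2 N)**3 quantum steps

print(f"naive trial division: ~10**{digits // 2} steps (~sqrt(N))")
print(f"Shor's algorithm:     ~{shor_steps:.1e} steps")
</code></pre>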
<p>Given these limitations, it’s not true to say that quantum computers will be able to replace existing computers. For one thing, the expected <a href="http://www.pccomputernotes.com/clockspeed/clockspeed.htm">clock speed</a> of a quantum computer is not likely to be any faster than that of a classical one. </p>
<p>Therefore, if we run the same algorithm on a quantum and on a classical computer, the classical one will usually win. Quantum computers will only be better if an algorithm exists where the presence of entangled quantum states can be exploited to reduce the number of steps required in a calculation. </p>
<p>At this stage we don’t know of any quantum algorithm to reduce the complexity of, say, web browsing or text editing, but the search is on.</p>
<h2>A quantum future, today</h2>
<p>Regardless of how powerful and widespread quantum computers will be in decades to come, the basic research being undertaken to construct these machines is already very useful in the construction of classical systems.</p>
<p>One of the most promising uses for quantum computing today involves the use of single atoms <a href="http://spectrum.ieee.org/computing/hardware/key-step-toward-a-silicon-quantum-computer">coupled to silicon transistors</a>. That is, the exact same components used in classical computers but scaled to single atoms.</p>
<p>In this way, much of what we learn in the pursuit of a quantum computer can be reused to push classical computers a step further in their miniaturisation. </p>
<p>Quantum computing won’t provide us with a replacement to classical computers if and when Moore’s Law grinds to a halt. </p>
<p>But it will help solve some interesting and challenging problems in computing.</p>
<p class="fine-print"><em><span>Andrea Morello receives funding from the Australian Research Council and the Australian, NSW and U.S. Governments. He is affiliated with the ARC Centre of Excellence for Quantum Computation and Communication Technology.</span></em></p>Ever noticed that computers become outdated remarkably quickly? It’s nice to have increasingly powerful computers available, and very profitable for the computer industry to have a new model available…Andrea Morello, Associate Professor, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.