tag:theconversation.com,2011:/id/topics/supercomputers-838/articles
Supercomputers – The Conversation
2024-03-26T17:01:56Z
tag:theconversation.com,2011:article/226257
2024-03-26T17:01:56Z
2024-03-26T17:01:56Z
How long before quantum computers can benefit society? That’s Google’s US$5 million question
<figure><img src="https://images.theconversation.com/files/583117/original/file-20240320-26-rmpub2.jpg?ixlib=rb-1.1.0&rect=5%2C0%2C3828%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/quantum-computer-black-background-3d-render-1571871052">Bartlomiej K. Wroblewski / Shutterstock</a></span></figcaption></figure><p>Google and the XPrize Foundation have launched a competition worth US$5 million (£4 million) to develop <a href="https://blog.google/technology/research/google-gesda-and-xprize-launch-new-competition-in-quantum-applications/">real-world applications for quantum computers</a> that benefit society – by speeding up progress on one of the UN Sustainable Development Goals, for example. The principles of quantum physics suggest quantum computers could perform very fast calculations on particular problems, so this competition may expand the range of applications where they have an advantage over conventional computers.</p>
<p>In our everyday lives, the way nature works can generally be described by what we call <a href="https://en.wikipedia.org/wiki/Classical_physics#:%7E:text=Classical%20physical%20concepts%20are%20often,of%20quantum%20mechanics%20and%20relativity.">classical physics</a>. But nature behaves very differently at tiny quantum scales – below the size of an atom. </p>
<p>The race to harness quantum technology can be viewed as a new industrial revolution, progressing from devices that use the properties of classical physics to those utilising the <a href="https://www.energy.gov/science/doe-explainsquantum-mechanics#:%7E:text=Quantum%20mechanics%20is%20the%20field,%E2%80%9Cwave%2Dparticle%20duality.%E2%80%9D">weird and wonderful properties of quantum mechanics</a>. Scientists have spent decades trying to develop new technologies by harnessing these properties. </p>
<p>Given how often we are told that <a href="https://projects.research-and-innovation.ec.europa.eu/en/horizon-magazine/quantum-technologies">quantum technologies</a> will revolutionise our everyday lives, you may be surprised that we still have to search for practical applications by offering a prize. However, while there are numerous examples of success using quantum properties for enhanced precision in sensing and timing, there has been a surprising lack of progress in the development of quantum computers that outdo their classical predecessors.</p>
<p>The main bottleneck holding up this development is that the software – using <a href="https://www.nature.com/articles/npjqi201523">quantum algorithms</a> –
needs to demonstrate an advantage over computers based on classical physics. This is commonly known as <a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">“quantum advantage”</a>.</p>
<p>A crucial way quantum computing differs from classical computing is in using a property known as <a href="https://spectrum.ieee.org/what-is-quantum-entanglement">“entanglement”</a>. Classical computing <a href="https://web.stanford.edu/class/cs101/bits-bytes.html">uses “bits”</a> to represent information. These bits consist of ones and zeros, and everything a computer does comprises strings of these ones and zeros. But quantum computing allows these bits to be in a <a href="https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-qubit">“superposition” of ones and zeros</a>. In other words, it is as if these ones and zeros occur simultaneously in the quantum bit, or qubit.</p>
<p>It is this property that, in effect, lets a quantum computer explore many computational paths at once. Hence the belief that quantum computing can offer a significant advantage over classical computing for certain tasks. </p>
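As a toy sketch of the idea (not something from the article itself), a qubit's superposition can be modelled classically as a pair of complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
import numpy as np

# A qubit state is a vector of two complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
zero = np.array([1, 0], dtype=complex)  # the classical bit "0"

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ zero

probabilities = np.abs(superposed) ** 2
print(probabilities)  # [0.5 0.5]: equal chance of measuring 0 or 1
```

Simulating n qubits this way needs 2^n amplitudes, which is precisely why classical machines struggle to imitate quantum ones.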
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers</a>
</strong>
</em>
</p>
<hr>
<h2>Notable quantum algorithms</h2>
<p>While performing many tasks simultaneously should lead to a performance increase over classical computers, putting this into practice has proven more difficult than theory would suggest. There are actually only a few notable quantum algorithms which can perform their tasks better than those using classical physics.</p>
<figure class="align-center ">
<img alt="Quantum chips - rendering" src="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/futuristic-cpu-quantum-processor-global-computer-1210158169">Yurchanka Siarhei / Shutterstock</a></span>
</figcaption>
</figure>
<p>The most notable are the <a href="https://www.st-andrews.ac.uk/physics/quvis/simulations_html5/sims/cryptography-bb84/Quantum_Cryptography.html">BB84 protocol</a>, developed in 1984, and <a href="https://www.nature.com/articles/s41598-021-95973-w">Shor’s algorithm</a>, developed in 1994, both of which exploit quantum properties – such as superposition and entanglement – to outperform classical approaches on particular tasks. </p>
<p>The BB84 protocol is a cryptographic protocol – a system for ensuring secure, private communication between two or more parties – and it is considered more secure than comparable classical schemes.</p>
<p>Shor’s algorithm demonstrates how current <a href="https://www.rand.org/pubs/commentary/2023/09/when-a-quantum-computer-is-able-to-break-our-encryption.html#:%7E:text=One%20of%20the%20most%20important,secure%20internet%20traffic%20against%20interception.">classical encryption protocols can be broken</a>, because their security relies on the difficulty of factorising very large numbers. <a href="https://ieeexplore.ieee.org/document/365700">There is also evidence</a> that it can perform certain calculations faster than similar algorithms designed for conventional computers. </p>
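The only quantum step in Shor's algorithm is finding the "order" of a number modulo N; turning that order into factors is purely classical. The sketch below (an illustrative toy, with the order found by slow brute force rather than a quantum computer) shows that classical reduction:

```python
from math import gcd

def factor_via_order(N: int, a: int):
    """Classical half of Shor's approach: given the order r of a mod N,
    a^(r/2) +/- 1 often shares a non-trivial factor with N."""
    if gcd(a, N) != 1:
        return gcd(a, N)  # lucky: a already shares a factor with N
    # Brute-force order finding -- the step a quantum computer speeds up.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:
        return None  # odd order: try a different a
    f = gcd(pow(a, r // 2, N) - 1, N)
    return f if 1 < f < N else None

print(factor_via_order(15, 7))  # 3: 7 has order 4 mod 15, and gcd(7^2 - 1, 15) = 3
```

The brute-force loop takes time exponential in the number of digits of N; replacing it with quantum period-finding is the entire speedup.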
<p>Despite the superiority of these two algorithms over conventional ones, few advantageous quantum algorithms have followed. However, researchers have not given up trying to develop them. Currently, there are a couple of main directions in research.</p>
<h2>Potential quantum benefits</h2>
<p>The first is to use quantum mechanics to assist in what are called <a href="https://arxiv.org/abs/2312.02279">large-scale optimisation tasks</a>. Optimisation – finding the best or most effective way to solve a particular task – is vital in everyday life, from ensuring traffic flow runs effectively, to managing operational procedures in factory pipelines, to streaming services deciding what to recommend to each user. It seems clear that quantum computers could help with these problems.</p>
<p>If we could reduce the computational time required to perform the optimisation, it could save energy, reducing the carbon footprint of the many computers currently performing these tasks around the world and the data centres supporting them.</p>
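To make "optimisation task" concrete, here is a tiny example of the kind of binary-choice problem (a QUBO, the form quantum annealers target) solved by classical brute force – a toy illustration, not from the article. The brute-force search grows as 2^n choices, which is what motivates looking for quantum shortcuts:

```python
from itertools import product

# Toy QUBO: pick binary variables x to minimise a quadratic cost.
# Each chosen item earns -1, but choosing both incurs a +2 penalty.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

def cost(x):
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Exhaustive search over all 2^2 assignments -- infeasible for large n.
best = min(product([0, 1], repeat=2), key=cost)
print(best, cost(best))
```

The optimum here is to pick exactly one item (cost −1); real scheduling, routing and recommendation problems have the same shape with thousands of variables.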
<p>Another development that could offer wide-reaching benefits is to use quantum computation to simulate systems, such as combinations of atoms, that behave according to quantum mechanics. Understanding and predicting how quantum systems work in practice could, for example, lead to better drug design and medical treatments. </p>
<p>Quantum systems could also lead to improved electronic devices. As computer chips get smaller, quantum effects take hold, potentially reducing the devices’ performance. A better fundamental understanding of quantum mechanics could help avoid this.</p>
<p>While there has been significant investment in building quantum computers, there has been less focus on ensuring they will directly benefit the public. However, that now appears to be changing.</p>
<p>Whether we will all have quantum computers in our homes within the next 20 years remains doubtful. But, given the current financial commitment to making quantum computation a practical reality, it seems that society is finally in a better position to make use of them. What precise form will this take? There’s US$5 million on the line to find out.</p><img src="https://counter.theconversation.com/content/226257/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Adam Lowe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Quantum computing has huge promise from a technical perspective, but the practical benefits are less clear.
Adam Lowe, Lecturer, School of Computer Science and Digital Technologies, Aston University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/220044
2023-12-18T16:17:12Z
2023-12-18T16:17:12Z
A new supercomputer aims to closely mimic the human brain — it could help unlock the secrets of the mind and advance AI
<figure><img src="https://images.theconversation.com/files/566252/original/file-20231218-15-hajmbj.jpg?ixlib=rb-1.1.0&rect=19%2C9%2C6470%2C3940&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businessman-touching-digital-human-brain-cell-582507070">Sdecoret / Shutterstock</a></span></figcaption></figure><p>A supercomputer scheduled to go online in April 2024 will rival the estimated rate of operations in the human brain, <a href="https://www.westernsydney.edu.au/newscentre/news_centre/more_news_stories/world_first_supercomputer_capable_of_brain-scale_simulation_being_built_at_western_sydney_university">according to researchers in Australia</a>. The machine, called DeepSouth, is capable of performing 228 trillion operations per second. </p>
<p>It’s the world’s first supercomputer capable of simulating networks of neurons and synapses (key biological structures that make up our nervous system) at the scale of the human brain.</p>
<p>DeepSouth belongs to an approach <a href="https://www.nature.com/articles/s43588-021-00184-y">known as neuromorphic computing</a>, which aims to mimic the biological processes of the human brain. It will be run from the International Centre for Neuromorphic Systems at Western Sydney University.</p>
<p>Our brain is the most amazing computing machine we know. By distributing its computing power across billions of small units (neurons) that interact through trillions of connections (synapses), the brain can rival the most powerful supercomputers in the world while drawing about as much power as a fridge light bulb.</p>
<p>Supercomputers, meanwhile, generally take up lots of space and need large amounts of electrical power to run. The world’s most powerful supercomputer, the <a href="https://www.hpe.com/uk/en/compute/hpc/cray/oak-ridge-national-laboratory.html">Hewlett Packard Enterprise Frontier</a>, can perform just over one quintillion operations per second. It covers 680 square metres (7,300 sq ft) and requires 22.7 megawatts (MW) to run. </p>
<p>Our brains can perform the same number of operations per second with just 20 watts of power, while weighing just 1.3kg-1.4kg. Among other things, neuromorphic computing aims to unlock the secrets of this amazing efficiency.</p>
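The efficiency gap described above can be put in numbers using the figures from the article (roughly one quintillion operations per second for Frontier at 22.7 MW, versus a comparable rate for the brain at 20 W):

```python
# Figures quoted in the article (approximate).
frontier_ops, frontier_watts = 1e18, 22.7e6  # ~1 quintillion ops/s, 22.7 MW
brain_ops, brain_watts = 1e18, 20.0          # comparable rate, ~20 W

# Operations per second per watt, and the ratio between the two.
efficiency_ratio = (brain_ops / brain_watts) / (frontier_ops / frontier_watts)
print(f"The brain is roughly {efficiency_ratio:,.0f}x more energy-efficient")
```

On these (admittedly rough) numbers, the brain comes out around a million times more energy-efficient per operation.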
<h2>Transistors at the limits</h2>
<p>On June 30 1945, the mathematician and physicist <a href="https://www.ias.edu/von-neumann">John von Neumann</a> described the design of a new machine, the <a href="https://ieeexplore.ieee.org/document/194089">Electronic Discrete Variable Automatic Computer (Edvac)</a>. This effectively defined the modern electronic computer as we know it. </p>
<p>My smartphone, the laptop I am using to write this article and the most powerful supercomputer in the world all share the same fundamental structure introduced by von Neumann almost 80 years ago. <a href="https://www.sciencedirect.com/topics/computer-science/von-neumann-architecture">These all have distinct processing and memory units</a>, where data and instructions are stored in the memory and computed by a processor.</p>
<p>For decades, the number of transistors on a microchip doubled approximately every two years, <a href="https://ieeexplore.ieee.org/abstract/document/591665">an observation known as Moore’s Law</a>. This allowed us to have smaller and cheaper computers. </p>
<p>However, transistor sizes are now approaching the atomic scale. At these tiny sizes, excessive heat generation is a problem, as is a phenomenon called quantum tunnelling, which interferes with the functioning of the transistors. <a href="https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips#:%7E:text=They're%20made%20of%20silicon,we%20can%20make%20a%20transistor.">This is slowing down</a> and will eventually halt transistor miniaturisation.</p>
<p>To overcome this issue, scientists are exploring new approaches to
computing, starting from the powerful computer we all have hidden in our heads, the human brain. Our brains do not work according to John von Neumann’s model of the computer. They don’t have separate computing and memory areas. </p>
<p>They instead work by connecting billions of nerve cells that communicate information in the form of electrical impulses. Information can be passed from <a href="https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/action-potentials-and-synapses">one neuron to the next through a junction called a synapse</a>. The organisation of neurons and synapses in the brain is flexible, scalable and efficient. </p>
<p>So in the brain – and unlike in a computer – memory and computation are governed by the same neurons and synapses. Since the late 1980s, scientists have been studying this model with the intention of importing it to computing.</p>
<figure class="align-center ">
<img alt="Microchip." src="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The continuing miniaturisation of transistors on microchips is limited by the laws of physics.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-presentation-new-generation-microchip-gloved-691548583">Gorodenkoff / Shutterstock</a></span>
</figcaption>
</figure>
<h2>Imitation of life</h2>
<p>Neuromorphic computers are based on intricate networks of simple, elementary processors (which act like the brain’s neurons and synapses). The main advantage of this is that these machines <a href="https://www.electronicsworld.co.uk/advances-in-parallel-processing-with-neuromorphic-analogue-chip-implementations/34337/">are inherently “parallel”</a>. </p>
<p>This means that, <a href="https://www.pnas.org/doi/full/10.1073/pnas.95.3.933">as with neurons and synapses</a>, virtually all the processors in a computer can potentially be operating simultaneously, communicating in tandem.</p>
<p>In addition, because the computations performed by individual neurons and synapses are very simple compared with traditional computers, the energy consumption is orders of magnitude smaller. Although neurons are sometimes thought of as processing units, and synapses as memory units, they contribute to both processing and storage. In other words, data is already located where the computation requires it.</p>
<p>This speeds up the brain’s computing because there is no separation between memory and processor, a split that slows down classical (von Neumann) machines. It also avoids the separate step of fetching data from a main memory component, which in conventional computing systems consumes a considerable amount of energy. </p>
<p>The principles we have just described are the main inspiration for DeepSouth. This is not the only neuromorphic system currently active. It is worth mentioning the <a href="https://www.humanbrainproject.eu">Human Brain Project (HBP)</a>, funded under an <a href="https://ec.europa.eu/futurium/en/content/fet-flagships.html">EU initiative</a>. The HBP was operational from 2013 to 2023, and led to BrainScaleS, a machine located in Heidelberg, in Germany, that emulates the way that neurons and synapses work. </p>
<p><a href="https://www.humanbrainproject.eu/en/science-development/focus-areas/neuromorphic-computing/hardware/">BrainScaleS</a> can simulate the way that neurons “spike”, the way that an electrical impulse travels along a neuron in our brains. This would make BrainScaleS an ideal candidate to investigate the mechanics of cognitive processes and, in future, mechanisms underlying serious neurological and neurodegenerative diseases.</p>
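The "spiking" behaviour that machines such as BrainScaleS emulate can be sketched with the textbook leaky integrate-and-fire model – an illustrative simplification, not the actual BrainScaleS or DeepSouth implementation:

```python
# Leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates incoming current, and emits a spike on crossing a
# threshold, after which it resets.
dt, tau = 1.0, 10.0            # time step and membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

def simulate(input_current, steps=100):
    v, spikes = v_rest, []
    for t in range(steps):
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:      # threshold crossed: spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(len(simulate(1.5)))  # stronger input -> more frequent spiking
```

Information lives in the timing of the spikes rather than in stored numbers, which is why neuromorphic hardware can leave computation and memory in the same place.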
<p>Because they are engineered to mimic actual brains, neuromorphic computers could be the beginning of a turning point. Offering sustainable and affordable computing power and allowing researchers to evaluate models of neurological systems, they are an ideal platform for a range of applications. They have the potential to both advance our understanding of the brain and offer new approaches to artificial intelligence.</p><img src="https://counter.theconversation.com/content/220044/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Neuromorphic computers aim to one day replicate the amazing efficiency of the brain.
Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/204905
2023-05-08T20:11:01Z
2023-05-08T20:11:01Z
Supercomputers have revealed the giant ‘pillars of heat’ funnelling diamonds upwards from deep within Earth
<figure><img src="https://images.theconversation.com/files/524817/original/file-20230508-27-u0wox4.jpg?ixlib=rb-1.1.0&rect=32%2C24%2C5359%2C3564&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Most diamonds are formed deep inside Earth and brought close to the surface in small yet powerful volcanic eruptions of a kind of rock called “kimberlite”. </p>
<p>Our <a href="https://www.nature.com/articles/s41561-023-01181-8">supercomputer modelling</a>, published in Nature Geoscience, shows these eruptions are fuelled by giant “pillars of heat” rooted 2,900 kilometres below ground, just above our planet’s core.</p>
<p>Understanding Earth’s internal history can be used to target mineral reserves – not only diamonds, but also crucial minerals such as nickel and rare earth elements. </p>
<h2>Kimberlite and hot blobs</h2>
<p>Kimberlite eruptions leave behind a characteristic deep, carrot-shaped “pipe” of kimberlite rock, which often contains diamonds. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0012821X17307124">Hundreds of these eruptions</a> that occurred over the past 200 million years have been discovered around the world. Most of them were found in Canada (178 eruptions), South Africa (158), Angola (71) and Brazil (70).</p>
<p><iframe id="FbdgL" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/FbdgL/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>Between Earth’s solid crust and molten core is the mantle, a thick layer of slightly goopy hot rock. For decades, geophysicists have used computers to study how the mantle slowly flows over long periods of time. </p>
<p>In the 1980s, <a href="https://www.sciencedirect.com/science/article/pii/0012821X84900438">one study showed</a> that kimberlite eruptions might be linked to small thermal plumes in the mantle – feather-like upward jets of hot mantle rising due to their higher buoyancy – beneath slowly moving continents. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/volcanoes-diamonds-and-blobs-a-billion-year-history-of-earths-interior-shows-its-more-mobile-than-we-thought-179673">Volcanoes, diamonds, and blobs: a billion-year history of Earth's interior shows it's more mobile than we thought</a>
</strong>
</em>
</p>
<hr>
<p>It had <a href="https://www.nature.com/articles/230042a0">already been argued</a>, in the 1970s, that these plumes might originate from the boundary between the mantle and the core, at a depth of 2,900km.</p>
<p>Then, in 2010, <a href="https://www.nature.com/articles/nature09216">geologists proposed</a> that kimberlite eruptions could be explained by thermal plumes arising from the edges of two deep, hot blobs anchored under Africa and the Pacific Ocean.</p>
<p>And last year, <a href="https://theconversation.com/volcanoes-diamonds-and-blobs-a-billion-year-history-of-earths-interior-shows-its-more-mobile-than-we-thought-179673">we reported that</a> these anchored blobs are more mobile than we thought.</p>
<p>However, we still didn’t know exactly how activity deep in the mantle was driving kimberlite eruptions.</p>
<h2>Pillars of heat</h2>
<p>Geologists assumed that mantle plumes could be responsible for igniting kimberlite eruptions. However, there was still a big question remaining: how was heat being transported from the deep Earth up to the kimberlites?</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=494&fit=crop&dpr=1 600w, https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=494&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=494&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=621&fit=crop&dpr=1 754w, https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=621&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/524728/original/file-20230506-35349-epn7ao.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=621&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A snapshot of the global mantle convection model centred on subduction underneath the South American plate.</span>
<span class="attribution"><span class="source">Ömer F. Bodur</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>To address this question, we used <a href="https://nci.org.au">supercomputers</a> in Canberra, Australia to create three-dimensional geodynamic models of Earth’s mantle. Our models account for the movement of continents on the surface and into the mantle over the past one billion years. </p>
<p>We calculated the movements of heat upward from the core and discovered that broad mantle upwellings, or “pillars of heat”, connect the very deep Earth to the surface. Our modelling shows these pillars supply heat underneath kimberlites, and they explain most kimberlite eruptions over the past 200 million years. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=303&fit=crop&dpr=1 600w, https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=303&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=303&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=381&fit=crop&dpr=1 754w, https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=381&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/524816/original/file-20230508-23-g6oon7.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=381&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A schematic representation of Earth’s heat pillars and how they bring heat to kimberlites, based on output from our geodynamic model.</span>
<span class="attribution"><span class="source">Ömer F. Bodur</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The model successfully captured kimberlite eruptions in Africa, Brazil, Russia and partly in the United States and Canada. Our models also predict that previously undiscovered kimberlite eruptions occurred in East Antarctica and the Yilgarn Craton of Western Australia. </p>
<figure>
<iframe src="https://player.vimeo.com/video/824007338" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Earth’s “pillars of heat” in a global mantle convection model can be used to predict kimberlite eruptions. Credit: Ömer F. Bodur.</span></figcaption>
</figure>
<p>Towards the centre of the pillars, mantle plumes rise much faster and carry dense material across the mantle, which may explain chemical differences between kimberlites in <a href="https://www.nature.com/articles/s41561-023-01181-8">different continents</a>.</p>
<p>Our models do not explain some of the kimberlites in Canada, which might be related to a different geological process called “plate subduction”. We have so far predicted kimberlites back to one billion years ago, which is the current limit of <a href="https://www.sciencedirect.com/science/article/abs/pii/S0012825220305237">reconstructions of tectonic plate movements</a>.</p><img src="https://counter.theconversation.com/content/204905/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ömer Bodur was supported by funding from the Australian Research Council and from De Beers.</span></em></p><p class="fine-print"><em><span>Nicolas Flament receives funding from the Australian Research Council and from De Beers.</span></em></p>
The volcanic eruptions that bring diamonds to Earth’s surface are driven by ‘pillars of heat’ stretching deep inside the planet.
Ömer F. Bodur, Honorary Fellow, University of Wollongong
Nicolas Flament, Associate Professor, University of Wollongong
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/196473
2023-03-22T12:39:31Z
2023-03-22T12:39:31Z
Building better brain collaboration online – despite scientific squabbles, the decade-long Human Brain Project brought measurable success to neuroscience collaboration
<figure><img src="https://images.theconversation.com/files/515898/original/file-20230316-1755-h1n8e9.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2101%2C1427&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Bringing scientific research online can help improve collaboration to a degree. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/image-of-people-walking-in-a-high-speed-data-space-royalty-free-image/1349695388">Hiroshi Watanabe/DigitalVision via Getty Images</a></span></figcaption></figure><p>Recent years have seen both impressive <a href="https://doi.org/10.1038/nrn3578">advances in computational technologies and neuroscience</a> and <a href="https://doi.org/10.1016/S2215-0366(21)00395-3">increasing prevalence of mental disorders</a>. These forces sparked the launch of <a href="https://doi.org/10.1038/nn.4371">brain science initiatives</a> worldwide. In the past decade, a “<a href="https://theconversation.com/the-brain-race-can-giant-computers-map-the-mind-12342">brain race</a>” between Europe, <a href="https://theconversation.com/illuminating-the-brain-one-neuron-and-synapse-at-a-time-5-essential-reads-about-how-researchers-are-using-new-tools-to-map-its-structure-and-function-187607">the U.S.</a>, Israel, Japan and China has taken off with the goal of <a href="https://www.nationalgeographic.com/science/article/the-science-of-big-science">understanding human brain function</a>.</p>
<p>One of the earliest brain initiatives was the 10-year, 1 billion-euro (US$1.33 billion in 2013) <a href="https://www.humanbrainproject.eu/en/about/overview/">Human Brain Project</a>, which launched in 2013 as a flagship science initiative of the European Commission’s <a href="https://web.archive.org/web/20181222034306/https://ec.europa.eu/digital-single-market/en/fet-flagships">Future and Emerging Technologies program</a>. The project <a href="https://www.youtube.com/watch?v=JqMpGrM5ECo">initially sought</a> to <a href="https://doi.org/10.1016/j.procs.2011.12.015">simulate the entire human brain</a> in a supercomputer within a decade, continuing the work its founder, neuroscientist <a href="https://scholar.google.com/citations?user=W3lyJF8AAAAJ&hl=en">Henry Markram</a>, started with his 2005 <a href="https://doi.org/10.1038/nrn1848">Blue Brain Project</a>. Not only did it seek to digitize the brain, but research and laboratory work were also <a href="https://doi.org/10.1016/j.procs.2011.12.015">designed to be completely digital</a>, with researchers distributed across Europe.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LS3wMC2BpxU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The initial goal of the Human Brain Project was to simulate the entire human brain in a supercomputer.</span></figcaption>
</figure>
<p>However, the project was rife with controversy among neuroscientists worldwide. It <a href="https://doi.org/10.1038/482456a">faced skepticism</a> before it even started and gathered <a href="https://doi.org/10.1038/513027a">heated criticism</a> and <a href="https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/">debate</a> once funded. After over 800 neuroscientists worldwide <a href="https://web.archive.org/web/20160621075754/http://neurofuture.eu/">signed an open letter</a> calling for a revamp of the program, it was <a href="https://doi.org/10.1038/511133a">completely reorganized</a> in 2015. From then on, its aim was to develop a European digital research infrastructure to advance brain science and create “<a href="https://doi.org/10.1016/j.neuron.2016.10.046">brain-inspired information technology</a>.”</p>
<p>Now, 10 years later, the project is coming to a close. It remains an open question whether it achieved its goals.</p>
<p><a href="https://www.lucyxiaoluwang.com/">We are</a> <a href="https://www.ip.mpg.de/en/persons/kreyer-ann-christin.html">economists</a> who study how <a href="https://scholar.google.com/citations?user=M0QlVjcAAAAJ&hl=en">digital infrastructure</a> can help scientists collaborate in challenging times. Our <a href="https://doi.org/10.1371/journal.pone.0278402">recently published research</a> found that while the Human Brain Project experienced major changes in its structure and goals, it was able to promote collaboration through its online forum. </p>
<h2>Evolving research focuses</h2>
<p>The project was composed of <a href="https://doi.org/10.1016/j.neuron.2016.10.046">scientists from various disciplines</a>, including neuroscience, computer science, physics, informatics and mathematics. More than 500 scientists and engineers at over 120 research institutions across Europe and beyond have <a href="https://www.humanbrainproject.eu/en/about-hbp/human-brain-project-ebrains/">engaged in HBP research activities</a>.</p>
<p>Although many neuroscientists view <a href="https://doi.org/10.1016/j.neuron.2019.03.027">brain network simulation</a> as an important step to advance brain science, many others criticized the project’s <a href="https://doi.org/10.1038/nature.2015.18704">initial focus on computer simulations</a>. Scientists argued that simulations would <a href="https://www.theguardian.com/science/2014/jul/07/human-brain-project-researchers-threaten-boycott">never be enough</a> to explain the <a href="https://doi.org/10.1038/511125a">function of the entire brain</a> without complementary experiments on animals or tissues. Some viewed the program as <a href="https://doi.org/10.1038/513027a">an IT project</a> rather than a neuroscience one. Others worried that <a href="https://doi.org/10.1038/513027a">other important research areas</a> would be neglected. Combined with a perceived <a href="https://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/">lack of transparency</a> and a <a href="https://doi.org/10.1038/513027a">mismatch between</a> the size of its task, its time frame and its setup, the reorganization the open letter called for was inevitable.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Timeline of Human Brain Project milestones" src="https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=191&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=191&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=191&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=240&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=240&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501528/original/file-20221216-23-v4jcww.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=240&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Human Brain Project aimed to achieve ambitious milestones despite major restructuring and controversy.</span>
<span class="attribution"><a class="source" href="https://doi.org/10.1371/journal.pone.0278402">Lucy Xiaolu Wang and Ann-Christin Kreyer</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>After the revamp, the project dropped its original goal of simulating the complete brain and focused instead on advancing brain science with computational methods.</p>
<p>In 2016, the project also started hosting supercomputer-powered online research platforms <a href="https://wiki.ebrains.eu/bin/view/Collabs/the-collaboratory/">on the Collaboratory</a> for researchers to collaborate virtually. This infrastructure enabled the development of <a href="https://doi.org/10.1016/j.neuron.2016.10.046">advanced software and complex brain simulations</a> by providing cloud-based platforms for collaboration and data storage, as well as data analytics, supercomputers and modeling tools.</p>
<p>In 2018, the platform host transitioned from the project to <a href="https://ebrains.eu/">EBRAINS</a> as an upgraded and permanent version powered by new E.U. neuroscience supercomputing centers. EBRAINS is intended to serve as the backbone for a pan-European online neuroscience research platform after the project ends. Through EBRAINS, the project’s research data, models, tools and results <a href="https://doi.org/10.1016/j.neuroimage.2022.118973">will be made accessible</a> for further research.</p>
<h2>The HBP online forum</h2>
<p>To complement the research platforms, the <a href="https://forum.humanbrainproject.eu/">Human Brain Project Forum</a> was launched in July 2015 to facilitate informal collaboration and knowledge-sharing. Users discussed both project-related activities and broad neuroscience programming challenges on this public forum. All topics and discussions could be viewed freely online, and anyone could make an account to post a question or comment on an existing thread. Opening the forum to the public was intended to facilitate the <a href="https://doi.org/10.1016/j.neuron.2016.10.046">exchange of results and expertise</a> with outside researchers to help achieve the project’s ambitious goals.</p>
<p>We wanted to know if the forum succeeded in its goal of <a href="https://doi.org/10.1371/journal.pone.0278402">connecting researchers</a> both within and beyond the project community. To answer this question, we examined patterns of user interaction and problem-solving on the forum from when it opened in July 2015 through March 2021. We measured user interaction by collecting data on all posted questions and replies, linked with available user information on the site or via public search. To analyze what factors facilitated collaborative problem-solving, we examined the solution status of the questions and users within each thread. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Diagram of Human Brain Project research focus areas and structure" src="https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=176&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=176&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=176&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=221&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=221&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501529/original/file-20221216-12-gy294k.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=221&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The structure of the Human Brain Project platforms and the online forum.</span>
<span class="attribution"><a class="source" href="https://doi.org/10.1371/journal.pone.0278402">Lucy Xiaolu Wang and Ann-Christin Kreyer</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>We found that the average interaction within each posted thread is comparable to <a href="https://stackoverflow.com/">Stack Overflow</a>, a popular Q&A website for programmers. On average, each Human Brain Project forum thread <a href="https://doi.org/10.1371/journal.pone.0278402">received 3.7 replies</a> compared with <a href="https://data.stackexchange.com/stackoverflow/query/50588/minimum-maximum-and-average-number-of-answers-per-post">1.47 replies per question</a> on Stack Overflow. Despite a drop in usage during early 2020 at the start of the COVID-19 pandemic, forum use rose substantially in late 2020 and early 2021.</p>
<p>Questions about programming related to the project’s core research areas gathered more attention, active discussion and faster resolution. While questions that attracted users from many countries were discussed more actively, they took longer to resolve. Problems with administrator support were solved faster overall. Patterns of online interaction did not significantly differ by project affiliation status, gender or seniority level.</p>
<p>Overall, the forum appeared to be an inclusive online community that fostered collaboration.</p>
<h2>Digitizing the life sciences</h2>
<p>There is a need to partially digitize the traditionally more laboratory-based life sciences. The U.S. Department of Energy highlighted this need when it created the <a href="https://www.energy.gov/science/articles/national-virtual-biotechnology-laboratory-unites-doe-labs-against-covid-19">National Virtual Biotechnology Laboratory</a> in 2020, a consortium of national laboratories that used supercomputer facilities to help scientists coordinate a united response to the COVID-19 pandemic.</p>
<p>But digitization doesn’t guarantee successful collaboration. While Europe’s Human Brain Project began with one specific goal that soon fell apart amid controversy and disagreement, the ongoing U.S. <a href="https://braininitiative.nih.gov/">Brain Research Through Advancing Innovative Neurotechnologies Initiative</a> has no single vision. Following a more traditional research approach, multiple teams <a href="https://theconversation.com/illuminating-the-brain-one-neuron-and-synapse-at-a-time-5-essential-reads-about-how-researchers-are-using-new-tools-to-map-its-structure-and-function-187607">work independently on various topics</a>. The BRAIN Initiative had received <a href="https://www.ninds.nih.gov/sites/default/files/documents/BRAIN_Initiative_Technical_Summary_Flyer_508C.pdf">over $3 billion in funding by 2022</a> – three times the amount for the Human Brain Project.</p>
<p>While the long-term impact of the project may not be fully understood, the <a href="https://www.humanbrainproject.eu/en/summit-2023/">Human Brain Project Summit 2023</a> from March 28 to 31 is set to provide a venue for open discussion with the broader community on what the HBP has achieved. Institutional support for neuroscience research can yield tremendous returns, but it remains unclear how to best design scientific organizations and use digitization in the process. We believe studying the science of science research could help achieve the collaboration and shared goals these initiatives seek.</p><img src="https://counter.theconversation.com/content/196473/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
The European Union’s 10-year Human Brain Project is coming to a close. Whether this controversial 1 billion-euro project achieved its aims is unclear, but its online forum did foster collaboration.
Lucy Xiaolu Wang, Assistant Professor, Resource Economics Dept., UMass Amherst
Ann-Christin Kreyer, Ph.D. Candidate in Innovation and Entrepreneurship, Max Planck Institute for Innovation and Competition
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/191989
2022-10-25T13:03:37Z
2022-10-25T13:03:37Z
Could energy efficiency be quantum computers’ greatest strength yet?
<figure><img src="https://images.theconversation.com/files/490845/original/file-20221020-26-a4hkfz.jpg?ixlib=rb-1.1.0&rect=0%2C50%2C5568%2C3650&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The energy consumption of large computers is very high – so what about future quantum computers?</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/VT4rx775FT4">maximalfocus/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p><a href="https://www.youtube.com/watch?v=bayTbt_8aNc">Quantum computers</a> have attracted considerable interest of late for their potential to crack problems in a few hours where <a href="https://hal.inria.fr/hal-00925622/document">they might take the age of the universe</a> (i.e., tens of billions of years) on the best supercomputers. Their <a href="https://theconversation.com/retour-sur-les-technologies-quantiques-sont-elles-pretes-a-entrer-dans-nos-vies-159740">real-life applications</a> are manifold and range from drugs and materials design to solving complex optimisation problems. They are therefore primarily intended for scientific and industrial research.</p>
<p>Traditionally, <a href="https://theconversation.com/cartes-bleues-et-securite-des-echanges-vers-une-cryptographie-post-quantique-178478">“quantum supremacy”</a> is sought from the point of view of raw computing power: we want to calculate (much) faster.</p>
<p>However, the question of their energy consumption could also now warrant research, with current supercomputers sometimes consuming <a href="https://www.la-croix.com/France/Meta-supercalculateurs-quoi-faire-2022-01-25-1201196837">as much electricity as a small town</a> (which could in fact <a href="http://www.cai2.sk/ojs/index.php/cai/article/view/1960">limit the increase in their computing power</a>). Information technologies, for their part, accounted for <a href="https://www.nature.com/articles/s43246-020-0022-5">11% of global electricity consumption</a> in 2020.</p>
<h2>Why focus on the energy consumption of quantum computers?</h2>
<p>Since a quantum computer can solve problems in a few hours where a supercomputer might take several tens of billions of years, it is natural to expect it will consume much less energy. However, manufacturing such powerful quantum computers will require that we solve many scientific and technological challenges, potentially over one to several decades of research.</p>
<p>A more modest goal would be to create less powerful quantum computers, capable of solving computations in a time relatively comparable to supercomputers, but using much less energy.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/491064/original/file-20221021-20-ppcrlh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Google, Amazon, Microsoft and IBM are some of the tech giants to have taken an interest in quantum computing.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/ibm_research_zurich/40645906341">Graham Carlow</a></span>
</figcaption>
</figure>
<p>This potential energy benefit of quantum computing has already been discussed. Google’s <a href="https://www.nature.com/articles/s41586-019-1666-5">Sycamore quantum processor</a> consumes 26 kilowatts of electrical power, far less than a supercomputer, and runs a test quantum algorithm in seconds. Following the experiment, scientists put forward classical algorithms to simulate the quantum algorithm. The first proposals for classical algorithms required <a href="https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/">much more energy</a> – which seemed to demonstrate the energy advantage of quantum computing, but they were soon followed by <a href="https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.041038">other proposals</a>, which were much more energy efficient.</p>
<p>The question of the energy advantage therefore remains open and is an active research topic, especially since the quantum algorithm performed by Sycamore has no identified “useful” application to date.</p>
<h2>Superposition: the fragile phenomenon at the heart of quantum computing</h2>
<p>To know whether quantum computers can be expected to provide an energy advantage, it is necessary to understand the fundamental laws according to which they operate.</p>
<p>Quantum computers manipulate physical systems called <a href="https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-a-qubit/">qubits</a> (for <em>quantum bits</em>) to perform a calculation. A qubit can take two values: 0 (the “ground state”, of minimum energy) and 1 (the “excited state”, of maximum energy). It can also occupy a “superposition” of 0 and 1. <a href="https://press.princeton.edu/books/hardcover/9780691183527/philosophy-of-physics">How we interpret superpositions is still the subject of heated philosophical debates</a>, but, to put it simply, it means that the qubit can be “both” in state 0 and state 1 with certain associated <a href="https://www.feynmanlectures.caltech.edu/III_03.html">“probability amplitudes”</a>.</p>
<p>Thanks to these probabilities, we can <a href="https://www.nature.com/articles/nature13460">greatly simplify</a> the principle of the quantum computer by saying that it implements algorithms that perform calculations on several numbers “at once” (in this case 0 and 1 at the same time). This advantage becomes clear when the number of qubits is increased: 300 qubits in superpositions are capable of representing 2 to the power of 300 states at the same time. As an example, <a href="https://www.youtube.com/watch?v=4cwtlmTfNYA">this is approximately the number of atoms in the observable universe</a> – so representing so many states <em>at once</em> on a supercomputer is completely unrealistic.</p>
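The counting argument above is easy to check numerically. A minimal sketch (the figure of roughly 10⁸⁰ atoms in the observable universe is a commonly cited estimate, not taken from this article):

```python
from math import log10

# Number of basis states representable by n qubits in superposition: 2**n
n_qubits = 300
n_states = 2 ** n_qubits

# 2**300 is a 91-digit number, i.e. on the order of 10**90
print(len(str(n_states)))  # 91

# Commonly cited estimate of the number of atoms in the observable
# universe: ~10**80. The state count for 300 qubits already exceeds it.
atoms_in_universe = 10 ** 80
print(n_states > atoms_in_universe)  # True
print(round(n_qubits * log10(2)))    # ~90 orders of magnitude
```

Storing one amplitude per state on classical hardware would therefore require more numbers than there are atoms to build the memory from, which is why direct simulation breaks down so quickly as qubit counts grow.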
<figure class="align-center ">
<img alt="A man stands by a quantum computer in California" src="https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=387&fit=crop&dpr=1 600w, https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=387&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=387&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=487&fit=crop&dpr=1 754w, https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=487&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/491014/original/file-20221021-12-dnas7u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=487&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Erik Lucero, lead engineer of Google Quantum AI, stands beside a quantum computer in Goleta, California in September.</span>
<span class="attribution"><a class="source" href="https://news.afp.com/#/c/main/search/photos?id=newsml.afp.com.20220922T015329Z.doc-32jv4yv&type=photo">Frederic Brown/AFP</a></span>
</figcaption>
</figure>
<p>However, the foundations of quantum theory tell us that if the values of these probability amplitudes are “measured” by another physical system, then the superposition is destroyed: the qubit relaxes to the value of 1 or 0, thus introducing an error into the calculation.</p>
<p>One concrete example of such destruction is when the qubit absorbs a photon (a particle of light carrying a small packet of energy). The very fact that the qubit can absorb energy means it was not in its maximum-energy state. The photon, and through it the “environment” of the qubit, has thus indirectly “found” the value of the amplitudes, which destroys the superposition. This is called <a href="https://arxiv.org/abs/quant-ph/0306072">“decoherence”</a>.</p>
<p>Generally speaking, the challenge is to ensure that the qubits are sufficiently isolated to avoid any information leakage: we can’t allow a photon or another particle to disturb our qubit. This is a challenge because the qubits must also be controllable: they cannot be completely isolated.</p>
<p>This lack of protection is the main source of error in qubit-based calculations. For example, one of the most mature qubit technologies runs into an <a href="https://arxiv.org/abs/1905.13641">error every 1,000 operations</a>. When you consider that <a href="https://arxiv.org/abs/1905.09749">it takes 10¹³ operations</a> for a typical quantum algorithm, you can see that this is far too many.</p>
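A rough calculation with the two figures quoted above shows why this is prohibitive. This sketch treats “an error every 1,000 operations” as a per-operation error probability of 10⁻³, which is an interpretive assumption:

```python
import math

error_rate = 1e-3  # roughly one error per 1,000 operations
operations = 1e13  # operations needed by a typical quantum algorithm

# Expected number of errors over the whole algorithm: ten billion
expected_errors = error_rate * operations
print(expected_errors)  # 1e10

# Log-probability that every single operation succeeds:
# ln P = operations * ln(1 - error_rate), on the order of -1e10,
# so the chance of an error-free run is effectively zero.
log_p_success = operations * math.log1p(-error_rate)
print(log_p_success)
```

In other words, with current error rates an uncorrected run would accumulate billions of errors, which is why error protection (and its energy cost, discussed next) is unavoidable.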
<h2>Preserving superpositions has an energy cost</h2>
<p>The energy cost of quantum computing will mostly come from this need to “protect the quantum data”. For example, it is often necessary to cool the qubit environment to close to absolute zero (-273°C) to ensure that no photons populate it, avoiding the problem mentioned above. This is a very energy-intensive process.</p>
<p>Some other techniques, such as <a href="https://www.pourlascience.fr/sd/technologie/les-codes-correcteurs-d-erreurs-quantiques-23810.php">quantum error correction</a>, also preserve quantum information and can improve the fidelity of operations. However, in addition to the challenges they raise, these techniques incur a very high energy cost of their own, because they require error-detection algorithms and additional qubits dedicated to detecting errors.</p>
<p>In short, the more accurate we want an operation performed on a qubit to be, the more it will have to be protected, and the more energy we will have to spend for that. There is a very strong link between “error rate” and “energy” in quantum computing. Understanding this link precisely may then allow the design of a very energy efficient computer.</p>
<h2>Is an energy quantum advantage possible?</h2>
<p>Some theoretical studies have been able to calculate the energy cost necessary for the realisation of quantum computers, but in a <a href="https://arxiv.org/abs/2103.16726">non-optimised regime</a>, notably not exploiting the link between error rate and energy, and often with a <a href="https://arxiv.org/abs/2205.12092">simplified model</a> of the computer.</p>
<p>Exploiting this link can lead to powerful optimisations <a href="https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.2.040335">reducing the energy cost of algorithms</a>. In practice, <a href="https://tel.archives-ouvertes.fr/tel-03579666">this requires an interdisciplinary approach</a> including the understanding of the fundamental phenomena inducing decoherence, the modelling of quantum error correction algorithms and codes as well as the whole “engineering” part necessary to control the qubits. One can then calculate the minimal energy cost needed to solve different problems, while aiming at an error probability for the algorithm considered as “acceptable”.</p>
<figure class="align-center ">
<img alt="A disk from a quantum computing system" src="https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=410&fit=crop&dpr=1 600w, https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=410&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=410&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=515&fit=crop&dpr=1 754w, https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=515&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/491081/original/file-20221021-13-3lhsfi.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=515&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A wafer of a D-Wave quantum computing system.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/jurvetson/39188583425">Creative Commons Attribution</a></span>
</figcaption>
</figure>
<p>As <a href="https://arxiv.org/abs/2209.05469">we have seen</a>, for qubits of excellent quality (i.e., of a quality that is still out of reach in practice today), there are tasks for which the quantum computer could spend one hundred times less energy than the best current supercomputers for a comparable calculation time (comparable in the sense that both would be able to solve the task in a reasonable time). This energy gain of a factor of 100 is also indicative: one could imagine saving more energy by carrying out additional optimisations.</p>
<p>This is because a quantum computer computes using fundamentally different processes from a classical computer: the former manipulates qubits and the latter bits. Thus, for the same task and even for the same computing time, the number of operations can be drastically different. Moreover, an operation performed in a quantum computer involves physical processes that are radically different from those implemented in a supercomputer. Taken together, these two remarks imply that, even at equal computation time, and even if a quantum logic operation consumes more energy than a classical logic operation, the smaller number of quantum logic operations may make the quantum computer ultimately much more energy-efficient.</p>
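The comparison in the previous paragraph boils down to two products, operations × energy-per-operation. A minimal sketch with purely hypothetical numbers (none of these figures come from the article):

```python
# Hypothetical, illustrative figures only: assume a quantum logic
# operation costs 1,000x more energy than a classical one, but the
# quantum algorithm needs far fewer operations for the same task.
classical_ops = 1e20           # hypothetical classical operation count
quantum_ops = 1e13             # hypothetical quantum operation count
energy_per_classical_op = 1.0  # arbitrary energy unit
energy_per_quantum_op = 1e3    # 1,000x more expensive per operation

classical_energy = classical_ops * energy_per_classical_op  # 1e20 units
quantum_energy = quantum_ops * energy_per_quantum_op        # 1e16 units

# Despite costlier individual operations, the quantum total is
# four orders of magnitude smaller in this toy comparison.
print(classical_energy / quantum_energy)  # 10000.0
```

The design point is that the ratio of operation counts, not the per-operation cost, dominates the total; whether real machines ever reach such ratios is exactly the open question the article describes.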
<p>Of course, this example comes from theoretical calculations, based on sometimes highly optimistic assumptions. However, it does seem to indicate that one of the primary advantages of quantum computing <a href="https://quantum-energy-initiative.org/">may well be energetic before it is computational</a>.</p><img src="https://counter.theconversation.com/content/191989/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marco Fellous-Asiani does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has declared no affiliations other than his research organization.</span></em></p>
Recent research suggests quantum computers could solve problems with breathtaking speed compared with current supercomputers.
Marco Fellous-Asiani, Postdoctoral researcher in quantum information at the Centre of New Technologies, University of Warsaw
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/192367
2022-10-20T08:06:51Z
2022-10-20T08:06:51Z
Noise in the brain enables us to make extraordinary leaps of imagination. It could transform the power of computers too
<figure><img src="https://images.theconversation.com/files/490334/original/file-20221018-4769-ep7hqv.gif?ixlib=rb-1.1.0&rect=207%2C0%2C1587%2C992&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/video/clip-1039258709-ai-artificial-intelligence-digital-network-technologies-concepts">Shutterstock</a></span></figcaption></figure><p>We all have to make hard decisions from time to time. The hardest of my life was whether or not to change research fields after my PhD, from fundamental physics to climate physics. I had job offers that could have taken me in either direction – one to join Stephen Hawking’s <a href="http://www.damtp.cam.ac.uk/research/gr/about-us">Relativity and Gravitation Group</a> at Cambridge University, another to join the <a href="https://www.metoffice.gov.uk/">Met Office</a> as a scientific civil servant.</p>
<p>I wrote down the pros and cons of both options as one is supposed to do, but then couldn’t make up my mind at all. Like <a href="https://en.wikipedia.org/wiki/Buridan%27s_ass">Buridan’s donkey</a>, I was unable to move to either the bale of hay or the pail of water. It was a classic case of paralysis by analysis.</p>
<p>Since it was doing my head in, I decided to try to forget about the problem for a couple of weeks and get on with my life. In that intervening time, my unconscious brain decided for me. I simply walked into my office one day and the answer had somehow become obvious: I would make the change to studying the weather and climate.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/uncharted-brain-decoding-dementia-a-three-part-series-to-read-and-listen-to-193162">Uncharted Brain: Decoding Dementia – a three-part series to read and listen to</a>
</strong>
</em>
</p>
<hr>
<p>More than four decades on, I’d make the same decision again. My fulfilling career has included developing a <a href="https://www.ecmwf.int/en/about/media-centre/news/2022/symposium-prof-tim-palmer-take-place-5-and-6-december">new, probabilistic way of forecasting weather and climate</a> which is helping humanitarian and disaster relief agencies make better decisions ahead of extreme weather events. (This and many other aspects are described in my new book, <a href="https://global.oup.com/academic/product/the-primacy-of-doubt-9780192843593?lang=en&cc=gb">The Primacy of Doubt</a>.)</p>
<p>But I remain fascinated by what was going on in my head back then, which led my subconscious to make a life-changing decision that my conscious could not. Is there something to be understood here not only about how to make difficult decisions, but about how humans make the leaps of imagination that characterise us as such a creative species? I believe the answer to both questions lies in a better understanding of the extraordinary power of noise.</p>
<h2>Imprecise supercomputers</h2>
<p>I went from the pencil-and-paper mathematics of Einstein’s theory of general relativity to running complex climate models on some of the world’s biggest supercomputers. Yet big as they were, they were never big enough – the real climate system is, after all, very complex.</p>
<p>In the early days of my research, one only had to wait a couple of years and top-of-the-range supercomputers would get twice as powerful. This was the era where <a href="https://www.pcmag.com/encyclopedia/term/transistor#:%7E:text=In%20the%20digital%20world%2C%20a,or%20even%20billions%20of%20transistors.">transistors</a> were getting smaller and smaller, allowing more to be crammed on to each microchip. The consequent doubling of computer performance for the same power every couple of years was known as <a href="https://www.investopedia.com/terms/m/mooreslaw.asp">Moore’s Law</a>.</p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This story is part of Conversation Insights</em></strong>
<br><em>The Insights team generates <a href="https://theconversation.com/uk/topics/insights-series-71218">long-form journalism</a> and is working with academics from different backgrounds who have been engaged in projects to tackle societal and scientific challenges.</em></p>
<hr>
<p>There is, however, only so much miniaturisation you can do before the transistor starts becoming unreliable in its key role as an on-off switch. Today, with transistors starting to approach <a href="https://spectrum.ieee.org/smallest-transistor-one-carbon-atom">atomic size</a>, we have pretty much reached the limit of Moore’s Law. To achieve more number-crunching capability, computer manufacturers must bolt together more and more computing cabinets, each one crammed full of chips.</p>
<p>But there’s a problem. Increasing number-crunching capability this way requires a lot more electric power – modern supercomputers the size of tennis courts consume tens of megawatts. I find it something of an embarrassment that we need so much energy to try to accurately predict the effects of climate change.</p>
<p>That’s why I became interested in how to construct a more accurate climate model <em>without</em> consuming more energy. And at the heart of this is an idea that sounds counterintuitive: by adding random numbers, or “noise”, to a climate model, we can actually make it more accurate in predicting the weather.</p>
<h2>A constructive role for noise</h2>
<p>Noise is usually seen as a nuisance – something to be minimised wherever possible. In telecommunications, we speak about trying to maximise the “signal-to-noise ratio” by boosting the signal or reducing the background noise as much as possible. However, in <a href="https://en.wikipedia.org/wiki/Nonlinear_system">nonlinear systems</a>, noise can be your friend and actually contribute to boosting a signal. (A <a href="http://kolibri.teacherinabox.org.au/modules/en-boundless/www.boundless.com/algebra/textbooks/boundless-algebra-textbook/conic-sections-341/nonlinear-systems-of-equations-and-inequalities-52/models-involving-nonlinear-systems-of-equations-222-6108/index.html#:%7E:text=Some%20other%20real%2Dworld%20examples,is%20somewhere%20on%20a%20circle.">nonlinear system</a> is one whose output does not vary in direct proportion to the input. You will likely be very happy to win £100 million on the lottery, but probably not twice as happy to win £200 million.) </p>
<p>Noise can, for example, help us find the maximum value of a complicated curve such as in Figure 1, below. There are many situations in the physical, biological and social sciences as well as in engineering where we might need to find such a maximum. In my field of meteorology, the process of finding the best initial conditions for a global weather forecast involves identifying the maximum point of a very <a href="https://en.wikipedia.org/wiki/Numerical_weather_prediction">complicated meteorological function</a>.</p>
<p><strong>Figure 1</strong></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A curve with multiple local peaks and troughs" src="https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=292&fit=crop&dpr=1 600w, https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=292&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=292&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=366&fit=crop&dpr=1 754w, https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=366&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/490372/original/file-20221018-8262-yoxe8v.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=366&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A curve with multiple local peaks and troughs.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>However, employing a “<a href="https://en.wikipedia.org/wiki/Deterministic_algorithm">deterministic algorithm</a>” to locate the global maximum doesn’t usually work. This type of algorithm will typically get stuck at a local peak (for example at point <strong>a</strong>) because the curve moves downwards in both directions from there.</p>
<p>An answer is to use a technique called “<a href="https://en.wikipedia.org/wiki/Simulated_annealing">simulated annealing</a>” – so called because of its similarities with <a href="https://en.wikipedia.org/wiki/Annealing_(materials_science)">annealing</a>, the heat treatment process that changes the properties of metals. Simulated annealing, which employs noise to get round the issue of getting stuck at local peaks, has been used to solve many problems including the classic <a href="https://optimization.mccormick.northwestern.edu/index.php/Traveling_salesman_problems">travelling salesman puzzle</a> of finding the shortest path between a <a href="https://optimization.mccormick.northwestern.edu/index.php/File:48StatesTSP.png">large number of cities on a map</a>.</p>
<p>Figure 1 shows a possible route to locating the curve’s global maximum (point <strong>b</strong>) by using the following criteria:</p>
<ul>
<li><p>If a randomly chosen point is higher than the current position on the curve, then the new point is always moved to.</p></li>
<li><p>If it is lower than the current position, the suggested point isn’t necessarily rejected. It depends whether the new point is a lot lower or just a little lower.</p></li>
</ul>
<p>However, the decision to move to a new point also depends on how long the analysis has been running. Whereas in the early stages, random points quite a bit lower than the current position may be accepted, in later stages only those that are higher or just a tiny bit lower are accepted.</p>
<p>The technique is known as simulated annealing because early on – like hot metal in the early phase of cooling – the system is pliable and changeable. Later in the process – like cold metal in the late phase of cooling – it is almost rigid and unchangeable.</p>
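<p>The acceptance rule described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – not the scheme used in any weather model – searching for the maximum of a bumpy one-dimensional function loosely like the curve in Figure 1 (the function <code>bumpy</code> and all parameter values are invented for the sketch):</p>

```python
import math
import random

def bumpy(x):
    # A toy function with several local peaks, loosely like Figure 1.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) - 0.05 * (x - 2) ** 2

def simulated_annealing(f, x0, steps=20000, temp0=2.0):
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for i in range(steps):
        # The "temperature" falls as the run progresses: early on the
        # system is pliable, later it is almost rigid.
        temp = temp0 * (1 - i / steps) + 1e-9
        candidate = x + random.gauss(0, 0.5)
        fc = f(candidate)
        # Always accept a higher point; accept a lower one with a
        # probability depending on how much lower it is and on how
        # far the "cooling" has progressed.
        if fc >= fx or random.random() < math.exp((fc - fx) / temp):
            x, fx = candidate, fc
            if fx > best_fx:
                best_x, best_fx = x, fx
    return best_x, best_fx

random.seed(42)
x_max, f_max = simulated_annealing(bumpy, x0=-3.0)
```

<p>Because candidate points are drawn at random, a deterministic hill-climber started at the same point could get stuck on a local peak such as point <strong>a</strong>; the noise is what lets the search escape.</p>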
<h2>How noise can help climate models</h2>
<p>Noise was introduced into <a href="https://www.metoffice.gov.uk/weather/climate/science/climate-modelling">comprehensive weather and climate models</a> around 20 years ago. A key reason was to represent model uncertainty in our ensemble weather forecasts – but it turned out that adding noise also reduced some of the biases the models had, making them more accurate simulators of weather and climate.</p>
<p>Unfortunately, these models require huge supercomputers and a lot of energy to run them. They divide the world into small gridboxes, with the atmosphere and ocean within each assumed to be constant – which, of course, they aren’t. The horizontal scale of a typical gridbox is around 100km – so one way of making a model more accurate is to reduce this distance to 50km, or 10km or 1km. However, halving the width of each gridbox increases the computational cost of running the model by up to a factor of 16 – twice as many boxes in each horizontal direction and the vertical, plus a time step that must also be halved – meaning it consumes a lot more energy.</p>
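<p>The cost of refining the grid compounds quickly. As a back-of-the-envelope sketch (assuming, as the factor-of-16 figure does, that each halving of the gridbox width doubles the box count in both horizontal directions and the vertical, and halves the time step):</p>

```python
def cost_factor(halvings):
    """Relative compute cost after halving the gridbox width n times.

    Each halving: 2x boxes east-west, 2x north-south, 2x vertical
    levels, and a time step cut in half -> 2 * 2 * 2 * 2 = 16.
    """
    return 16 ** halvings

# 100km -> 50km is one halving; 100km -> 25km is two.
print(cost_factor(1))  # 16
print(cost_factor(2))  # 256
```

<p>Going from a 100km grid to a 1km grid is nearly seven halvings – which is why adding cheap noise instead is such an attractive alternative.</p>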
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/-fkCo_trbT8?wmode=transparent&start=1" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Here again, noise offered an appealing alternative. The <a href="https://www.nature.com/articles/s42254-019-0062-2">proposal</a> was to use it to represent the unpredictable (and unmodellable) variations in small-scale climatic processes like turbulence, cloud systems, ocean eddies and so on. I argued that adding noise could be a way of boosting accuracy without having to incur the enormous computational cost of reducing the size of the gridboxes. For example, <a href="https://journals.ametsoc.org/view/journals/clim/34/11/JCLI-D-20-0507.1.xml">as has now been verified</a>, adding noise to a climate model increases the likelihood of producing extreme hurricanes – reflecting the potential reality of a world whose weather is growing more extreme due to climate change.</p>
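<p>As a toy illustration of the idea – emphatically not any operational stochastic scheme – one can perturb the tendencies of a simple chaotic system, such as the Lorenz ’63 equations, with small multiplicative noise and watch the perturbed run diverge from the unperturbed one (all parameter and noise values here are invented for the sketch):</p>

```python
import random

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0, noise=0.0):
    """One Euler step of the Lorenz '63 system, with optional
    multiplicative noise on the tendencies - a toy stand-in for
    unresolved small-scale processes like turbulence or eddies."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # Multiplicative noise: each tendency is scaled by (1 + r).
    r = lambda: 1.0 + random.gauss(0.0, noise)
    return (x + dt * dx * r(), y + dt * dy * r(), z + dt * dz * r())

def run(noise, steps=1000, seed=0):
    random.seed(seed)
    s = (1.0, 1.0, 1.0)
    for _ in range(steps):
        s = lorenz_step(s, noise=noise)
    return s

deterministic = run(noise=0.0)
stochastic = run(noise=0.05)
```

<p>Even a small noise amplitude sends the stochastic run onto a different trajectory from the deterministic one – the kind of extra variability that, in a full climate model, can populate the tails of the weather distribution.</p>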
<p>The computer hardware we use for this modelling is inherently noisy – electrons travelling along wires in a computer move in partly random ways due to its warm environment. Such randomness is called “thermal noise”. Could we save even more energy by tapping into it, rather than having to use software to generate pseudo-random numbers? To me, low-energy <a href="https://www.nature.com/articles/526032a">“imprecise” supercomputers</a> that are inherently noisy looked like a win-win proposal. </p>
<p>But not all of my colleagues were convinced. They were uncomfortable that computers might not give the same answers from one day to the next. To try to persuade them, I began to think about other real-world systems that, because of limited energy availability, also use noise that is generated within their hardware. And I stumbled on the human brain.</p>
<h2>Noise in the brain</h2>
<p>Every second of the waking day, our eyes alone send gigabytes of data to the brain. That’s not much different to the amount of data a climate model produces each time it outputs data to memory.</p>
<p>The brain has to process this data and somehow make sense of it. If it did this using the power of a supercomputer, that would be impressive enough. But it does it using one millionth of that power: about 20W – roughly what it takes to power a lightbulb – instead of 20MW. Such energy efficiency is mind-bogglingly impressive. How on Earth does the brain do it?</p>
<p>An adult brain contains some 80 billion neurons. Each neuron has a long slender biological cable – the axon – along which electrical impulses are transmitted from one set of neurons to the next. But these impulses, which collectively describe information in the brain, have to be boosted by protein “transistors” positioned at regular intervals along the axons. Without them, the signal would dissipate and be lost.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Neurons and axons in the brain." src="https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=425&fit=crop&dpr=1 754w, https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=425&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/489566/original/file-20221013-18-isqtqg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=425&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Brain neurons and axons under a microscope.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/neurons-brain-on-white-background-307043336">Shutterstock</a></span>
</figcaption>
</figure>
<p>The energy for these boosts ultimately comes from an organic compound in the blood called ATP (adenosine triphosphate). This enables electrically charged atoms of sodium and potassium (ions) to be pushed through small channels in the neuron walls, creating electrical voltages which, much like those in silicon transistors, amplify the neuronal electric signals as they travel along the axons.</p>
<p>With 20W of power spread across tens of billions of neurons, the voltages involved are tiny, <a href="https://pubmed.ncbi.nlm.nih.gov/25142940/">as are the axon cables</a>. And there is evidence that axons with a diameter less than about 1 micron (which most in the brain are) are <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631351/">susceptible to noise</a>. In other words, the brain is a noisy system.</p>
<p>If this noise simply created unhelpful “brain fog”, one might wonder why we evolved to have so many slender axons in our heads. Indeed, there are benefits to having fatter axons: the signals propagate along them faster. If we still needed fast reaction times to escape predators, then slender axons would be <a href="https://www.mpg.de/15409874/axons-cmtm6">disadvantageous</a>. However, developing communal ways of defending ourselves against enemies may have reduced the need for fast reaction times, leading to an evolutionary trend towards thinner axons.</p>
<p>Perhaps, serendipitously, evolutionary mutations that further increased neuron numbers and reduced axon sizes, keeping overall energy consumption the same, made the brain’s neurons more susceptible to noise. And there is mounting evidence that this had another remarkable effect: it encouraged in humans the ability to solve problems that required leaps in imagination and creativity.</p>
<p>Perhaps we only truly became <em>Homo sapiens</em> when significant noise began to appear in our brains?</p>
<h2>Putting noise in the brain to good use</h2>
<p>Many animals have developed creative approaches to solving problems, but there is nothing to compare with a Shakespeare, a Bach or an Einstein in the animal world.</p>
<p>How do creative geniuses come up with their ideas? Here’s a quote from <a href="https://simonsingh.net/books/fermats-last-theorem/the-whole-story/">Andrew Wiles</a>, perhaps the most famous mathematician alive today, about the time leading up to his celebrated proof of the maths problem (misleadingly) known as Fermat’s Last Theorem:</p>
<blockquote>
<p>When you reach a real impasse, then routine mathematical thinking is of no use to you. Leading up to that kind of new idea, there has to be a long period of tremendous focus on the problem without any distraction. You have to really think about nothing but that problem – just concentrate on it. And then you stop. [At this point] there seems to be a period of relaxation during which the subconscious appears to take over – and it’s during this time that some new insight comes.</p>
</blockquote>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/6ymTZEeTjI8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">BBC’s Horizon unpicks Andrew Wiles’s novel approach to solving Fermat’s Theorem.</span></figcaption>
</figure>
<p>This notion seems universal. Physics Nobel Laureate <a href="https://sciworthy.com/the-connection-between-black-holes-and-einsteins-theory-of-relativity/#:%7E:text=Penrose%20had%20a%20moment%20of,be%20possible%20in%20all%20systems.">Roger Penrose</a> has spoken about his “Eureka moment” when crossing a busy street with a colleague (perhaps reflecting on their conversation while also looking out for oncoming traffic). For the father of chaos theory <a href="https://www.huffpost.com/entry/imagination-and-the-imagi_b_8178538">Henri Poincaré</a>, it was catching a bus.</p>
<p>And it’s not just creativity in mathematics and physics. Comedian John Cleese, of Monty Python fame, makes much the same point about artistic creativity – it occurs not when you are focusing hard on your trade, but when you relax and let your unconscious mind wander.</p>
<p>Of course, not all the ideas that bubble up from your subconscious are going to be Eureka moments. Physicist Michael Berry <a href="https://michaelberryphysics.files.wordpress.com/2013/06/u8.pdf">talks about</a> these subconscious ideas as if they are elementary particles called “claritons”:</p>
<blockquote>
<p>Actually, I do have a contribution to particle physics … the elementary particle of sudden understanding: the “clariton”. Any scientist will recognise the “aha!” moment when this particle is created. But there is a problem: all too frequently, today’s clariton is annihilated by tomorrow’s “anticlariton”. So many of our scribblings disappear beneath a rubble of anticlaritons.</p>
</blockquote>
<p>Here is something we can all relate to: that in the cold light of day, most of our “brilliant” subconscious ideas get annihilated by logical thinking. Only a very, very, very small number of claritons remain after this process. But the ones that do are likely to be gems.</p>
<p>In his renowned book <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow">Thinking, Fast and Slow</a>, the Nobel prize-winning psychologist Daniel Kahneman describes the brain in a binary way. Most of the time when walking, chatting and looking around (in other words when multitasking), it operates in a mode Kahneman calls “system 1” – a rather fast, automatic, effortless mode of operation.</p>
<p>By contrast, when we are thinking hard about a specific problem (unitasking), the brain is in the slower, more deliberative and logical “system 2”. To perform a calculation like 37 × 13, we have to stop walking, stop talking, close our eyes and even put our hands over our ears. No chance for significant multitasking in system 2.</p>
<p>My <a href="https://www.frontiersin.org/articles/10.3389/fncom.2015.00124/full">2015 paper</a> with computational neuroscientist Michael O’Shea interpreted system 1 as a mode where available energy is spread across a large number of active neurons, and system 2 as where energy is focused on a smaller number of active neurons. The amount of energy per active neuron is therefore much smaller when in the system 1 mode, and it would seem plausible that the brain is more susceptible to noise when in this state. That is, in situations when we are multitasking, the operation of any one of the neurons will be most susceptible to the effects of noise in the brain.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/daniel-kahneman-on-noise-the-flaw-in-human-judgement-harder-to-detect-than-cognitive-bias-160525">Daniel Kahneman on 'noise' – the flaw in human judgement harder to detect than cognitive bias</a>
</strong>
</em>
</p>
<hr>
<p>Berry’s picture of clariton-anticlariton interaction seems to suggest a model of the brain where the noisy system 1 and the deterministic system 2 act in synergy. The anticlariton is the logical analysis that we perform in system 2 which, most of the time, leads us to reject our crazy system 1 ideas.</p>
<p>But sometimes one of these ideas turns out to be not so crazy.</p>
<p>This is reminiscent of how our simulated annealing analysis (Figure 1) works. Initially, we might find many “crazy” ideas appealing. But as we get closer to locating the optimal solution, the criteria for accepting a new suggestion becomes more stringent and discerning. Now, system 2 anticlaritons are annihilating almost everything the system 1 claritons can throw at them – but not quite everything, as Wiles found to his great relief.</p>
<h2>The key to creativity</h2>
<p>If the key to creativity is the synergy between noisy and deterministic thinking, what are some consequences of this?</p>
<p>On the one hand, if you do not have the necessary background information then your analytic powers will be depleted. That’s why Wiles says that leading up to the moment of insight, you have to immerse yourself in your subject. You aren’t going to have brilliant ideas which will revolutionise quantum physics unless you have a pretty good grasp of quantum physics in the first place.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/0kSYdOwVi4Y?wmode=transparent&start=1455" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But you also need to leave yourself enough time each day to do nothing much at all, to relax and let your mind wander. I tell my research students that if they want to be successful in their careers, they shouldn’t spend every waking hour in front of their laptop or desktop. And swapping it for social media probably doesn’t help either, since you still aren’t really multitasking – each moment you are on social media, your attention is still fixed on a specific issue.</p>
<p>But going for a walk or bike ride or painting a shed probably does help. Personally, I find that driving a car is a useful activity for coming up with new ideas and thoughts – provided you don’t turn the radio on.</p>
<p>When making difficult decisions, this suggests that, having listed all the pros and cons, it can be helpful <em>not</em> to actively think about the problem for a while. I think this explains how, years ago, I finally made the decision to change my research direction – not that I knew it at the time.</p>
<p>Because the brain’s system 1 is so energy efficient, we use it to make the vast majority of the many decisions in our daily lives (some say as many as 35,000) – most of which aren’t that important, like whether to continue putting one leg in front of the other as we walk down to the shops. (I could alternatively stop after each step, survey my surroundings to make sure a predator was not going to jump out and attack me, and on that basis decide whether to take the next step.)</p>
<figure class="align-center ">
<img alt="Young man painting a shed" src="https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=423&fit=crop&dpr=1 754w, https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=423&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/489578/original/file-20221013-19-uwzexe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=423&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A key part of creative thinking?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/teenager-has-summer-job-painter-he-2027087039">Blodstrupmoen/Shutterstock</a></span>
</figcaption>
</figure>
<p>However, this system 1 thinking can sometimes lead us to make bad decisions, because we have simply defaulted to this low-energy mode and not engaged system 2 when we should have. How many times do we say to ourselves in hindsight: “Why didn’t I give such and such a decision more thought?”</p>
<p>Of course, if instead we engaged system 2 for every decision we had to make, then we wouldn’t have enough time or energy to do all the other important things we have to do in our daily lives (so the shops may have shut by the time we reach them).</p>
<p>From this point of view, we should not view giving wrong answers to unimportant questions as evidence of irrationality. Kahneman <a href="https://www.semanticscholar.org/paper/Representativeness-revisited%3A-Attribute-in-Kahneman-Frederick/4069615a36c33e61ca309b8ceaeb628a10d441b5?p2df">cites</a> the fact that more than 50% of students at MIT, Harvard and Princeton gave the incorrect answer to this simple question – a bat and a ball together cost $1.10; the bat costs one dollar more than the ball; how much does the ball cost? – as evidence of our irrationality. The correct answer, if you think about it, <a href="https://www.hitc.com/en-gb/2020/05/31/baseball-bat-and-ball-cost-1-10-riddle-answer-explained/">is 5 cents</a>. But system 1 screams out ten cents.</p>
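<p>The system-2 answer is one line of algebra: if the ball costs <em>b</em>, the bat costs <em>b</em> + 1.00, so <em>b</em> + (<em>b</em> + 1.00) = 1.10, giving <em>b</em> = 0.05. A trivial check:</p>

```python
ball = 0.05
bat = ball + 1.00  # the bat costs one dollar more than the ball
assert abs((bat + ball) - 1.10) < 1e-9  # together they cost $1.10

# The intuitive system-1 answer fails the same check:
wrong_ball = 0.10
wrong_bat = wrong_ball + 1.00
assert abs((wrong_bat + wrong_ball) - 1.10) > 0.05  # totals $1.20, not $1.10
```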
<p>If we were asked this question on pain of death, one would hope we would spend enough thought to come up with the correct answer. But if we were asked the question as part of an anonymous after-class test, when we had much more important things to spend time and energy doing, then I’d be inclined to think of it as irrational to give the right answer.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-how-does-our-brain-know-to-make-immediate-decisions-155532">Curious Kids: how does our brain know to make immediate decisions?</a>
</strong>
</em>
</p>
<hr>
<p>If we had 20MW to run the brain, we could spend part of it solving unimportant problems. But we only have 20W and we need to use it carefully. Perhaps it’s the 50% of MIT, Harvard and Princeton students who gave the wrong answer who are really the clever ones.</p>
<p>Just as a climate model with noise can produce types of weather that a model without noise can’t, so a brain with noise can produce ideas that a brain without noise can’t. And just as these types of weather can be exceptional hurricanes, so the idea could end up winning you a Nobel Prize.</p>
<p>So, if you want to increase your chances of achieving something extraordinary, I’d recommend going for that walk in the countryside, looking up at the clouds, listening to the birds cheeping, and thinking about what you might eat for dinner.</p>
<h2>So could computers be creative?</h2>
<p>Will computers, one day, be as creative as Shakespeare, Bach or Einstein? Will they understand the world around us as we do? Stephen Hawking <a href="https://www.bbc.co.uk/news/technology-30290540">famously warned</a> that AI will eventually take over and replace mankind.</p>
<p>However, the best-known advocate of the idea that computers will never understand as we do is Hawking’s old colleague, Roger Penrose. In making his claim, Penrose invokes an important “meta” theorem in mathematics known as <a href="https://www.theguardian.com/science/2022/jan/10/can-you-solve-it-godels-incompleteness-theorem#:%7E:text=In%201931%2C%20the%20Austrian%20logician,statements%20that%20cannot%20be%20proved.">Gödel’s theorem</a>, which says there are mathematical truths that can’t be proven by deterministic algorithms.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/hXgqik6HXc0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>There is a simple way of illustrating Gödel’s theorem. Suppose we make a list of all the most important mathematical theorems that have been proven since the time of the ancient Greeks. First on the list would be <a href="https://www-users.cs.york.ac.uk/susan/cyc/p/primeprf.htm#:%7E:text=Assume%20there%20are%20a%20finite,any%20of%20the%20p%20i%20.">Euclid’s proof</a> that there are an infinite number of prime numbers, which requires one really creative step (multiply the supposedly finite number of primes together and add one). Mathematicians would call this a “trick” – shorthand for a clever and succinct mathematical construction.</p>
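<p>Euclid’s trick is concrete enough to run. Multiply any supposedly complete finite list of primes together and add one: the result is divisible by none of them, so its smallest prime factor must be a new prime. A minimal sketch (the helper names are my own):</p>

```python
def smallest_prime_factor(n):
    """Trial division: the smallest factor > 1 is always prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes):
    """Euclid's construction: the product of the listed primes, plus
    one, leaves remainder 1 when divided by any of them - so its
    smallest prime factor cannot be in the list."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

p = new_prime([2, 3, 5, 7])  # 2*3*5*7 + 1 = 211, itself a new prime
```

<p>The construction does not always yield a prime directly – 2 × 3 × 5 × 7 × 11 × 13 + 1 = 30031 = 59 × 509 – but either way a prime outside the list emerges, which is exactly the creative step the proof needs.</p>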
<p>But is this trick useful for proving important theorems further down the list, like <a href="https://medium.com/not-zero/two-proofs-of-the-irrationality-of-the-square-root-of-2-fca5c38e44c">Pythagoras’s proof</a> that the square root of two cannot be expressed as the ratio of two whole numbers? It’s clearly not; we need another trick for that theorem. Indeed, as you go down the list, you’ll find that a new trick is typically needed to prove each new theorem. It seems there is no end to the number of tricks that mathematicians will need to prove their theorems. Simply loading a given set of tricks on a computer won’t necessarily make the computer creative. </p>
<p>Does this mean mathematicians can breathe easily, knowing their jobs are not going to be taken over by computers? Well, maybe not.</p>
<p>I have been arguing that we need computers to be noisy rather than entirely deterministic, “<a href="https://en.wikipedia.org/wiki/Reproducible_builds">bit-reproducible</a>” machines. And noise, especially if it comes from quantum mechanical processes, would break the assumptions of Gödel’s theorem: a noisy computer is <em>not</em> an algorithmic machine in the usual sense of the word.</p>
<p>Does this imply that a noisy computer can be creative? Alan Turing, pioneer of the general-purpose computing machine, believed this was possible, <a href="https://plato.stanford.edu/entries/turing/#:%7E:text=In%20other%20words%20then%2C%20if,makes%20no%20pretence%20at%20infallibility.">suggesting</a> that “if a machine is expected to be infallible then it cannot also be intelligent”. That is to say, if we want the machine to be intelligent then it had better be capable of making mistakes.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/turing-test-why-it-still-matters-123468">Turing Test: why it still matters</a>
</strong>
</em>
</p>
<hr>
<p>Others may argue there is no evidence that simply adding noise will make an otherwise stupid machine into an intelligent one – and I agree, as it stands. Adding noise to a climate model doesn’t automatically make it an intelligent climate model.</p>
<p>However, the type of synergistic interplay between noise and determinism – the kind that sorts the wheat from the chaff of random ideas – has hardly been developed in computer code yet. Perhaps we could develop a new type of AI model where the AI is trained by getting it to solve simple mathematical theorems using the clariton-anticlariton model; by making guesses and seeing if any of them have value.</p>
<p>For this to be at all tractable, the AI system would need to be trained to focus on “educated random guesses”. (If the machine’s guesses are all uneducated ones, it will take forever to make progress – like waiting for a group of monkeys to type the first few lines of Hamlet.)</p>
<p>For example, in the context of Euclid’s proof that there are an unlimited number of primes, could we train an AI system in such a way that a random idea like “multiply the assumed finite number of primes together and add one” becomes much more likely than the completely useless random idea “add the assumed finite number of primes together and subtract six”? And if a particular guess turns out to be especially helpful, can we train the AI system so that the next guess is a refinement of the last one? </p>
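<p>As a toy illustration only – not the clariton-anticlariton model itself – here is what "educated random guesses" might look like in code: each proposal is a noisy refinement of the best guess so far, and only improvements are kept. All names and parameters here are our own invention:</p>
<pre><code>import random

def noisy_search(score, propose, steps=2000, seed=1):
    """Hill-climb with random proposals: keep a best guess and
    accept a noisy refinement whenever it scores higher."""
    rng = random.Random(seed)
    best = propose(rng, None)          # uneducated first guess
    for _ in range(steps):
        cand = propose(rng, best)      # educated guess: refine the current best
        if score(cand) &gt; score(best):
            best = cand
    return best

# Toy problem: find the peak of a simple function on [0, 10].
score = lambda x: -(x - 7.3) ** 2
def propose(rng, best):
    if best is None:
        return rng.uniform(0, 10)                          # blind guess
    return min(10.0, max(0.0, best + rng.gauss(0, 0.5)))   # noisy refinement

x = noisy_search(score, propose)
assert abs(x - 7.3) &lt; 0.2
</code></pre>
<p>The noise supplies the candidate ideas; the deterministic scoring step plays the role of checking whether a guess "has value".</p>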
<p>If we can somehow find a way to do this, it could open up modelling to a completely new level that is relevant to all fields of study. And in so doing, we might yet reach the so-called “<a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a>” when machines take over from humans. But only when AI developers fully embrace the constructive role of noise – as it seems the brain did many thousands of years ago.</p>
<p>For now, I feel the need for another walk in the countryside. To blow away some fusty old cobwebs – and perhaps sow the seeds for some exciting new ones.</p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=112&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=112&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=112&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=140&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=140&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=140&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em>For you: more from our <a href="https://theconversation.com/uk/topics/insights-series-71218?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Insights series</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/the-magic-of-touch-how-deafblind-people-taught-us-to-see-the-world-differently-during-covid-191698?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The magic of touch: how deafblind people taught us to ‘see’ the world differently during COVID
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/the-human-body-has-37-trillion-cells-if-we-can-work-out-what-they-all-do-the-results-could-revolutionise-healthcare-185654?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The human body has 37 trillion cells. If we can work out what they all do, the results could revolutionise healthcare
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/the-inside-story-of-recovery-how-the-worlds-largest-covid-19-trial-transformed-treatment-and-what-it-could-do-for-other-diseases-184772?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The inside story of Recovery: how the world’s largest COVID-19 trial transformed treatment – and what it could do for other diseases
</a></em></p></li>
</ul>
<p><em>To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK"><strong>Subscribe to our newsletter</strong></a>.</em></p><img src="https://counter.theconversation.com/content/192367/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tim Palmer receives funding from The Royal Society and from the European Research Council. His book, The Primacy of Doubt is published by Oxford University Press.
</span></em></p>
From more accurate climate modelling to the prospect of truly creative computers, the brain’s use of noise has a lot to teach us.
Tim Palmer, Royal Society Research Professor, University of Oxford
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/188375
2022-08-09T06:36:18Z
2022-08-09T06:36:18Z
A new Australian supercomputer has already delivered a stunning supernova remnant pic
<figure><img src="https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&rect=4%2C4%2C1453%2C1049&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">CSIRO ASKAP Science Data Processing/Pawsey Supercomputing Research Centre</span>, <span class="license">Author provided</span></span></figcaption></figure><p>Within 24 hours of accessing the first stage of Australia’s newest supercomputing system, researchers have processed a series of radio telescope observations, including a highly detailed image of a supernova remnant.</p>
<p>The very high data rates and the enormous data volumes from new-generation radio telescopes such as <a href="https://www.csiro.au/askap">ASKAP</a> (Australian Square Kilometre Array Pathfinder) need highly capable software running on supercomputers. This is where the Pawsey Supercomputing Research Centre comes into play, with a <a href="https://pawsey.org.au/systems/setonix/">newly launched supercomputer called Setonix</a> – named after Western Australia’s favourite animal, <a href="https://australian.museum/learn/animals/mammals/quokka/">the quokka</a> (<em>Setonix brachyurus</em>).</p>
<p>ASKAP, which consists of 36 dish antennas that work together as one telescope, is operated by Australia’s national science agency CSIRO; the observational data it gathers are transferred via high-speed optical fibres to the Pawsey Centre for processing and converting into science-ready images.</p>
<p>In a major milestone on the path to full deployment, we have now demonstrated the integration of our processing software ASKAPsoft on Setonix, complete with stunning visuals.</p>
<figure class="align-center ">
<img alt="A bubbling red ball hangs in a dark background surrounded by points of light" src="https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&rect=4%2C4%2C1453%2C1049&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=433&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=433&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=433&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=544&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=544&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478042/original/file-20220808-16-t0qvkx.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=544&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">CSIRO ASKAP Science Data Processing/Pawsey Supercomputing Research Centre</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Traces of a dying star</h2>
<p>An exciting outcome of this exercise has been a fantastic image of a cosmic object known as a supernova remnant, <a href="http://simbad.u-strasbg.fr/simbad/sim-id?Ident=SNR+G261.9%2B05.5">G261.9+5.5</a>.</p>
<p>Estimated to be more than a million years old, and located 10,000–15,000 light-years away from us, this object in our galaxy was <a href="https://doi.org/10.1071/PH670297">first classified</a> as a supernova remnant by CSIRO radio astronomer Eric R. Hill in 1967, using observations from CSIRO’s <a href="https://www.csiro.au/en/about/facilities-collections/atnf/parkes-radio-telescope">Parkes Radio Telescope, Murriyang</a>.</p>
<p>Supernova remnants (SNRs) are the remains of powerful explosions from dying stars. The ejected material from the explosion ploughs outwards into the surrounding interstellar medium at supersonic speeds, sweeping up gas and any material it encounters along the way, compressing and heating them up in the process.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-if-a-star-explodes-will-it-destroy-earth-105127">Curious Kids: If a star explodes, will it destroy Earth?</a>
</strong>
</em>
</p>
<hr>
<p>The shockwave also compresses the interstellar magnetic fields. The emissions we see in our radio image of G261.9+5.5 come from highly energetic electrons trapped in these compressed fields, and they bear information about the history of the exploded star and aspects of the surrounding interstellar medium.</p>
<p>The structure of this remnant revealed in the deep ASKAP radio image opens up the possibility of studying this remnant and the physical properties (such as magnetic fields and high-energy electron densities) of the interstellar medium in unprecedented detail.</p>
<figure class="align-center ">
<img alt="A cut, grey-brown marsupial curiously looking at the camera" src="https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478190/original/file-20220809-26-o4j68c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The new supercomputer is named after the iconic quokka.</span>
<span class="attribution"><span class="source">Chia Chuin Wong/Shutterstock</span></span>
</figcaption>
</figure>
<h2>Putting a supercomputer through its paces</h2>
<p>The image of SNR G261.9+05.5 might be beautiful to look at, but the processing of data from ASKAP’s astronomy surveys is also a great way to stress-test the supercomputer system, including the hardware and the processing software.</p>
<p>We included the supernova remnant’s dataset for our initial tests because its complex features would increase the processing challenges.</p>
<p>Data processing even with a supercomputer is a complex exercise, with different processing modes triggering various potential issues. For example, the image of the SNR was made by combining data gathered at hundreds of different frequencies (or colours, if you like), allowing us to get a composite view of the object.</p>
<p>But there is a treasure trove of information hidden in the individual frequencies as well. Extracting that information often requires making images at each frequency, requiring more computing resources and more digital space to store.</p>
<p>While Setonix has adequate resources for such intense processing, a key challenge would be to establish the stability of the supercomputer when lashed with such enormous amounts of data day in and day out. </p>
<p>Key to this quick first demonstration was the close collaboration between the Pawsey Centre and the ASKAP science data processing team members. Our teamwork enabled all of us to better understand these challenges and quickly find solutions.</p>
<p>These results mean we will be able to unearth more from the ASKAP data – for example, by making images at each individual frequency to extract the information hidden there.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-australias-supercomputers-crunched-the-numbers-to-guide-our-bushfire-and-pandemic-response-141047">How Australia's supercomputers crunched the numbers to guide our bushfire and pandemic response</a>
</strong>
</em>
</p>
<hr>
<h2>More to come</h2>
<p>But this is only the first of two installation stages for Setonix, with the second expected to be completed later this year.</p>
<p>This will allow data teams to process more of the vast amounts of data coming in from many projects in a fraction of the time. In turn, it will not only enable researchers to better understand our Universe but will undoubtedly uncover new objects hidden in the radio sky. The variety of scientific questions that Setonix will allow us to explore in shorter time-frames opens up so many possibilities.</p>
<p>This increase in computational capacity benefits not just ASKAP, but all Australia-based researchers in all fields of science and engineering that can access Setonix. </p>
<p>While the supercomputer is ramping up to full operations, so is ASKAP, which is currently wrapping up a series of pilot surveys and will soon undertake even larger and deeper surveys of the sky.</p>
<p>The supernova remnant is just one of many features we’ve now revealed, and we can expect many more stunning images, and the discovery of many new celestial objects, to come soon.</p><img src="https://counter.theconversation.com/content/188375/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Radio telescopes produce enormous amounts of data, and we need immense computing power to produce even a single image like this one.
Wasim Raja, Research scientist, CSIRO
Pascal Jahan Elahi, Supercomputing applications specialist, Pawsey Supercomputing Research Centre, CSIRO
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/169184
2021-10-11T08:01:33Z
2021-10-11T08:01:33Z
The Human Brain Project: six achievements of Europe’s largest neuroscience programme
<figure><img src="https://images.theconversation.com/files/425342/original/file-20211007-17-rqil29.jpg?ixlib=rb-1.1.0&rect=45%2C26%2C1549%2C1161&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Connections between brain cells.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/132318516@N08/22798807131">NIH Image Gallery/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>While humans have walked on the Moon and sent probes all over the solar system, our understanding of our own brain is still severely lacking. We <a href="http://www.neuroscience.cam.ac.uk/Uploads/Sahakian%20NBR%20Article.pdf">do not have complete knowledge</a> of how brain structure, chemicals and connectivity interact to produce our thoughts and behaviours. </p>
<p>But this isn’t from an absence of ambition. It is nearly eight years since the start of <a href="https://www.sciencedirect.com/science/article/pii/S0896627316307966">the Human Brain Project (HBP) in Europe</a>, which aims to unravel the brain’s mysteries. After a <a href="https://theconversation.com/after-years-of-conflict-huge-project-could-help-scientists-decipher-the-brain-42581">difficult start</a>, the project has made substantial discoveries and innovation, relevant for tackling clinical disorders, as well as <a href="https://www.humanbrainproject.eu/en/science/highlights-and-achievements/">technological advances</a> – and it has two more years to go. </p>
<p>It has also created <a href="https://ebrains.eu/">EBRAINS</a>, an open research infrastructure built on the scientific advances and tools developed by the project’s research teams, and making them available to the scientific community via a shared digital platform – a new achievement for collaborative research and instrumental in the achievements listed below. </p>
<h2>1. Human brain atlas</h2>
<p>The project has created a unique multilevel human brain atlas based on several aspects of brain organisation, including its structure on the smallest of scales, its function and connectivity. This atlas provides a large number of tools <a href="https://www.science.org/doi/abs/10.1126/science.abb4588">to visualise data</a> and work with them.</p>
<figure class="align-center ">
<img alt="Image of an atlas visualisation." src="https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=360&fit=crop&dpr=1 600w, https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=360&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=360&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=452&fit=crop&dpr=1 754w, https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=452&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/425443/original/file-20211008-27-nrvm2p.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=452&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Visualisation using the atlas.</span>
<span class="attribution"><span class="source">Forschungszentrum Juelich / HBP</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Researchers can automatically extract data from the atlas using a special tool to run a simulation for modelling the brains of specific patients. This can help to inform clinicians of the optimal treatment option. </p>
<h2>2. Synapses in the hippocampus</h2>
<p>Using electron microscopy, a technique to study brain tissue at ultrahigh resolution, researchers <a href="https://elifesciences.org/articles/57013">have published</a> detailed 3D-maps of around 25,000 synapses – the junctions through which brain cells exchange electrical and chemical signals – in the human hippocampus. This region of the brain is involved in memory, learning and spatial navigation, and is one of the first areas to be <a href="https://actaneurocomms.biomedcentral.com/articles/10.1186/s40478-021-01246-y">damaged early in Alzheimer’s disease</a>. </p>
<p>The Human Brain Project is the first to provide a detailed picture of synaptic structure in this important area of the brain. This could allow for a better understanding of diseases such as dementia, as well as aid in the development of computational models of the brain.</p>
<h2>3. Robotic hands</h2>
<p>Due to its complexity, the human hand is one of the most difficult body parts to imitate. While even small children can pick up and manipulate items, such as a cup of water, this has been a very difficult problem for robot hands. The <a href="https://www.shadowrobot.com/">Shadow Robot Company</a>, which participates in the Human Brain Project, designs and develops highly dexterous robotic hands to imitate the human hand as closely as possible. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3rZYn62OId8?wmode=transparent&start=1" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>One of the company’s recent inventions is the <a href="https://youtu.be/3rZYn62OId8">world’s first touch-based telerobot hand</a> (see video above). The robotic hand has 129 sensors and 24 joints, and its movements closely mimic those of human hands. This allows for the ideal use of force and much finer manipulation of objects than previous robot hands could achieve. </p>
<p>This invention will be important in prosthetics, and also in industries including manufacturing, space exploration, medicine and work in hazardous environments where you wouldn’t want to use your real hands – such as handling nuclear waste and biologically dangerous substances. The HBP scientists and the Shadow Robot Company work together, with the help of neural networks, to give the robotic hands an even more human-like dexterity.</p>
<h2>4. A neuro-inspired computer</h2>
<p>The human brain comprises nearly 100 billion interconnected brain cells, which is part of the reason it is so difficult to model and understand. Innovative computing has helped to further our understanding by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain. </p>
<p>The million-processor-core <a href="https://www.youtube.com/watch?v=EhPpxsK2Ia0">Spiking Neural Network Architecture</a> or “SpiNNaker” machine boasts 100 million transistors on each of its <a href="https://www.humanbrainproject.eu/en/silicon-brains/how-we-work/hardware/">30,000 chips</a>. One such chip can simulate 16,000 neurons and 8 million synapses in real time. This is comparable to, or even better than, the best brain-simulation supercomputer software currently used for neural-signaling research.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/EhPpxsK2Ia0?wmode=transparent&start=3" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But this is only the beginning. The unique SpiNNaker doesn’t communicate by sending large amounts of information from point A to B via a standard network. Instead it works more like the human brain and sends billions of small amounts of information simultaneously to thousands of different destinations, completely rethinking the way traditional computers work. </p>
<p>The SpiNNaker has the potential to overcome the speed and power-consumption problems of conventional supercomputers – something that is much needed if we are to crack the enigma of the human brain. Ultimately, it could advance our understanding of neural processing in the brain, including in learning and neurological diseases such as Alzheimer’s. </p>
<h2>5. Virtual epileptic patient</h2>
<p>Another important EBRAINS application was the development of a <a href="https://www.sciencedirect.com/science/article/pii/S1053811916300891">Virtual Epileptic Patient (VEP)</a>. This is a computer program based on personalised brain network models of individual patients, built by combining each patient’s brain connectivity with the seizure-generating areas and lesions detected by MRI. </p>
<p>Currently, clinicians use electroencephalography (EEG), which provides a recording of brain activity and helps to identify when and where a seizure begins. However, this information alone does not tell the clinician everything they need to determine the type of seizure and make the best treatment decisions. </p>
<figure class="align-center ">
<img alt="Image of a brain scanner." src="https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/425446/original/file-20211008-24-d8m4zj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Data from epileptic patients can be used in a computer program.</span>
<span class="attribution"><span class="source">© INS Marseille</span></span>
</figcaption>
</figure>
<p>The model provides a <a href="https://www.youtube.com/watch?v=DBI3t0wiFFw">personalised prediction</a> of the impact of a surgical treatment for a certain patient. The surgeon is then able to evaluate the impact of multiple different therapeutic strategies and determine the best treatment option, with the most successful outcome. The program is close to <a href="https://www.humanbrainproject.eu/en/follow-hbp/news/2021/09/05/the-first-hbp-innovation-award-went-to-the-the-virtual-brain-team-and-the-next-one-is-on-its-way/">commercial release</a>.</p>
<h2>6. Scientific output</h2>
<p>As of September 2021, 1,497 peer-reviewed journal articles, many in high-impact journals, cite the Human Brain Project. For example, in 2018 researchers published impressive work on neurotechnology <a href="https://www.nature.com/articles/s41586-018-0649-2">restoring walking in patients with spinal cord injury</a> in Nature. In 2020, a team published in Science <a href="https://www.science.org/doi/abs/10.1126/science.abb4588">the most comprehensive atlas</a> on the cellular architecture of the brain. In the same year, researchers also discovered <a href="https://www.science.org/doi/abs/10.1126/science.aax6239">a specific type of action potentials</a> in brain cells known as pyramidal cells and disclosed an important memory mechanism in the hippocampus.</p>
<p>A key objective going forward is to develop “foresight”, which is the practice of looking ahead to envision potential future developments and change. Hopefully, we will one day crack the monumental challenge of understanding the human brain and discovering novel treatments for neurological diseases and psychiatric disorders. But it won’t be easy – it is, after all, harder than rocket science.</p>
<p><em>We would like to thank the Director General of the Human Brain Project and the CEO of EBRAINS, <a href="https://www.humanbrainproject.eu/en/about/governance/boards/directorate/pawel-swieboda/">Pawel Swieboda</a>, for contributing to the information on EBRAINS.</em></p><img src="https://counter.theconversation.com/content/169184/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Barbara Jacquelyn Sahakian receives funding from the Wellcome Trust, Leverhulme Trust and Lundbeck Foundation. Her research is conducted within the NIHR MedTech and in vitro diagnostic Co-operative (MIC) and the NIHR Cambridge Biomedical Research Centre (BRC) Mental Health and Neurodegeneration themes. She is a member of the Scientific and Infrastructure Advisory Board of the Human Brain Project. </span></em></p><p class="fine-print"><em><span>Christelle Langley is funded by the Wellcome Trust and the Leverhulme Trust. Her research is conducted within the NIHR MedTech and in vitro diagnostic Co-operative (MIC) and the NIHR Cambridge Biomedical Research Centre (BRC) Mental Health and Neurodegeneration themes.</span></em></p><p class="fine-print"><em><span>Katrin Amunts receives funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3). She is professor for brain research, C. & O. Vogt Institute, University Hospital Düsseldorf at Heine University Düsseldorf, director of the Instiute of Neuroscience and Medicine, Forschungzentrum Jülich, and Scientific Research Director of the Human Brain Project. </span></em></p>
From robotic hands to brain-like computers, the Human Brain Project has produced some intriguing results.
Barbara Jacquelyn Sahakian, Professor of Clinical Neuropsychology, University of Cambridge
Christelle Langley, Postdoctoral Research Associate, Cognitive Neuroscience, University of Cambridge
Katrin Amunts, Professor of Neuroscience, Forschungszentrum Jülich
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/166271
2021-08-18T03:02:19Z
2021-08-18T03:02:19Z
Why bother calculating pi to 62.8 trillion digits? It’s both useless and fascinating
<figure><img src="https://images.theconversation.com/files/416657/original/file-20210818-19-1fq8pt0.jpeg?ixlib=rb-1.1.0&rect=0%2C138%2C4031%2C2879&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shisma/Wikimedia Commons</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Swiss researchers at the University of Applied Sciences Graubünden this week claimed a new world record for calculating the number of digits of pi – a staggering 62.8 trillion figures. By my estimate, if these digits were printed out they would fill every book in the British Library ten times over. The researchers’ feat of arithmetic took 108 days and 9 hours to complete, and dwarfs the <a href="https://www.guinnessworldrecords.com/world-records/66179-most-accurate-value-of-pi">previous record</a> of 50 trillion figures set in January 2020.</p>
<p>But why do we care?</p>
<p>The mathematical constant pi (π) is the ratio of a circle’s circumference to its diameter, and is approximately 3.1415926536. With only these ten decimal places, we could calculate the circumference of Earth to a precision of less than a millimetre. With 32 decimal places, we could calculate the circumference of our Milky Way galaxy to the precision of the width of a hydrogen atom. And with only 65 decimal places, we would know the size of the observable universe to within a <a href="https://astronomy.swin.edu.au/cosmos/P/Planck+Length">Planck length</a> – the shortest possible measurable distance.</p>
<p>What use, then, are the other 62.79 trillion digits? While the short answer is that they are not scientifically useful at all, mathematicians and computer scientists will be eagerly awaiting the details of this gargantuan computation for a variety of reasons.</p>
<h2>What makes pi so fascinating?</h2>
<p>The concept of pi is simple enough for a primary school student to grasp, yet its digits are notoriously difficult to calculate. A number like 1/7 needs infinitely many decimals to write down (0.1428571428571…), but its digits repeat every six places, making the number easy to understand. Pi, on the other hand, is an example of an irrational number, in which there are no repeating patterns. Not only is pi irrational, but it is also transcendental, meaning it cannot be defined through any simple equation featuring whole numbers. </p>
<p>Mathematicians around the world have been computing pi since ancient times, but techniques to do so changed dramatically after the 17th century, with the development of calculus and the techniques of infinite series. For example, the Madhava series (named after the Indian-Hindu mathematician <a href="https://en.wikipedia.org/wiki/Madhava_of_Sangamagrama">Madhava of Sangamagrama</a>), says:</p>
<blockquote>
<p>π = 4(1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + …)</p>
</blockquote>
<p>By adding more and more terms, this computation gets closer and closer to the true value of pi. But it takes a long time — after 500,000 terms, it produces only five correct decimal places of pi! </p>
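The series' painfully slow convergence is easy to see for yourself. A minimal sketch of the partial sums described above:

```python
import math

def madhava_pi(n_terms: int) -> float:
    """Partial sum of the Madhava series pi = 4(1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 500_000):
    approx = madhava_pi(n)
    # Even at 500,000 terms, only about five decimal places are correct.
    print(f"{n:>7} terms: {approx:.10f} (error {abs(math.pi - approx):.2e})")
```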
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-a-farm-boy-from-wales-gave-the-world-pi-55917">How a farm boy from Wales gave the world pi</a>
</strong>
</em>
</p>
<hr>
<p>The search for new formulae for pi adds to our mathematical understanding of the number, while also letting mathematicians vie for bragging rights in the quest for more digits. The <a href="https://en.wikipedia.org/wiki/Chudnovsky_algorithm">infinite sum used in the 2020 record-breaking effort</a> was discovered in 1988 and can calculate 14 new digits of pi for each new term that is added to the sum.</p>
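That 1988 series is the Chudnovsky formula, and a compact (entirely unoptimised) version fits in a few lines of Python using the arbitrary-precision `decimal` module. Record attempts use far more sophisticated arithmetic, but the underlying sum is the same:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    """Approximate pi with the Chudnovsky series.

    Each term of the sum contributes roughly 14 more correct digits.
    """
    getcontext().prec = digits + 10          # working precision plus guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K ** 3 - 16 * K) // i ** 3  # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return +(C / S)                          # unary + rounds to context precision

print(chudnovsky_pi(30))
```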
<p>While breaking the record may be one of the key motivators for finding new digits of pi, there are two other important benefits.</p>
<p>The first is the development and testing of supercomputers and new high-precision multiplication algorithms. Optimising the computation of pi leads to computer hardware and software that benefit many other areas of our lives, from accurate weather forecasting to DNA sequencing and even COVID modelling. </p>
<p>The latest computation of pi was 3.5 times as fast as the previous effort, despite the extra 12 trillion decimal places – an impressive increase in supercomputing performance in just 18 months.</p>
<figure class="align-center ">
<img alt="Pi written on roadside concrete fence" src="https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=547&fit=crop&dpr=1 600w, https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=547&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=547&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=688&fit=crop&dpr=1 754w, https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=688&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/416649/original/file-20210818-23-18i7yu7.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=688&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Three point one for the road.</span>
<span class="attribution"><span class="source">Daniel Nydegger/Wikimedia Commons</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>The second is the exploration of the very nature of pi. Despite centuries of research, there are still fundamental unanswered questions about the way its digits behave. It is conjectured that pi is a “normal” number, meaning all possible sequences of digits should appear equally often. </p>
<p>For example, we expect the digit 3 to appear as often as the digit 8, and the digit string “12345” to appear as often as “99999”. But we don’t even know if each decimal digit appears infinitely often in pi, let alone whether there are more complex patterns waiting to be discovered.</p>
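An empirical spot-check of this expectation (which, to be clear, proves nothing about normality) takes only a small amount of code. The sketch below generates digits with Machin's 1706 formula, pi = 16·arctan(1/5) − 4·arctan(1/239), in fixed-point integer arithmetic, then tallies them; this formula has nothing to do with the record computation, it is simply easy to run:

```python
from collections import Counter

def arctan_recip(x: int, one: int) -> int:
    """arctan(1/x) in fixed point, scaled by `one`: 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    power = one // x          # one / x**(2k+1)
    total = power
    k = 1
    while power:
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
        k += 1
    return total

def pi_digits(n: int) -> str:
    """First n decimal digits of pi via Machin's formula."""
    one = 10 ** (n + 10)      # ten guard digits absorb truncation error
    pi = 16 * arctan_recip(5, one) - 4 * arctan_recip(239, one)
    return str(pi // 10 ** 10)[1:]  # drop the leading "3", keep the decimals

counts = Counter(pi_digits(10_000))
# If pi is normal, each digit should turn up close to 1,000 times here.
print(sorted(counts.items()))
```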
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/3-14-essential-reads-about-for-pi-day-74022">3.14 essential reads about π for Pi Day</a>
</strong>
</em>
</p>
<hr>
<p>The data for the new pi computation have not yet been released, as the researchers are awaiting confirmation from the Guinness Book of Records. But we hope there will be many mathematically interesting treasures within the numbers. </p>
<p>We will never “finish” computing the digits of pi; there will always be more to find and new records to break. If you don’t happen to own a supercomputer, but you have a thirst for computing decimal digits (and a PhD in mathematics), why not try other interesting irrational numbers like <a href="https://en.wikipedia.org/wiki/Square_root_of_3">√3</a> (only known to 10 billion digits), the <a href="https://en.wikipedia.org/wiki/Generalizations_of_Fibonacci_numbers#Tribonacci_numbers">tribonacci constant</a> (20,000 digits), or the <a href="https://mathworld.wolfram.com/TwinPrimesConstant.html">Twin Prime Constant</a> (1,001 digits). You may not make the morning news, but it’s arguably an easier way to write yourself into the record books.</p>
<p class="fine-print"><em><span>Julia Collins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Calculating pi with unprecedented accuracy has zero scientific usefulness. But as a show of computing muscle and a mathematical curiosity, it’s endlessly intriguing.
Julia Collins, Lecturer of Mathematics, Edith Cowan University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/144779
2020-08-26T12:22:00Z
2020-08-26T12:22:00Z
The tech field failed a 25-year challenge to achieve gender equality by 2020 – culture change is key to getting on track
<figure><img src="https://images.theconversation.com/files/354705/original/file-20200825-15-h7v84m.jpg?ixlib=rb-1.1.0&rect=0%2C16%2C5499%2C3630&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The tech field has a long way to go to achieve gender parity.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/people-working-in-modern-office-royalty-free-image/878980536?adppopup=true">10'000 Hours/DigitalVision via Getty Images</a></span></figcaption></figure><p>In 1995, pioneering computer scientist <a href="https://anitab.org/about-us/about-anita-borg/">Anita Borg</a> challenged the tech community to a moonshot: <a href="https://www.youtube.com/watch?v=3nImg8vPUe4">equal representation of women in tech by 2020</a>. Twenty-five years later, we’re still far from that goal. In 2018, fewer than 30% of the <a href="https://www.vox.com/2018/4/11/17225574/facebook-tech-diversity-women">employees in tech’s biggest companies</a> and 20% of <a href="https://research.swe.org/2016/08/tenure-tenure-track-faculty-levels/">faculty in university computer science departments</a> were women.</p>
<p>On <a href="https://en.wikipedia.org/wiki/Women%27s_Equality_Day">Women’s Equality Day</a> in 2020, it’s appropriate to revisit Borg’s moonshot challenge. Today, awareness of the gender diversity problem in tech has increased, and professional development programs have improved women’s skills and opportunities. But special programs and “<a href="https://www.sciencemag.org/features/2011/01/fix-system-not-women">fixing women</a>” by improving their skills have not been enough. By and large, the tech field doesn’t need to fix women, it needs to fix itself.</p>
<p>As <a href="https://www.amacad.org/person/francine-d-berman">former head</a> of a national supercomputer center and a data scientist, I know that cultural change is hard but not impossible. It requires organizations to prioritize and promote material, not symbolic, change. It requires sustained effort and shifts of power to include more diverse players. Intentional strategies to promote openness, ensure equity, diversify leadership and measure success can work. I’ve seen it happen. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3nImg8vPUe4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">In 1995, Anita Borg called for a “moonshot” effort to achieve gender equality in the tech field by 2020.</span></figcaption>
</figure>
<h2>Swimming upstream</h2>
<p>I loved math as a kid. I loved finding elegant solutions to abstract problems. I loved learning that <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_strip">Mobius strips</a> have only one side and that there is <a href="https://en.wikipedia.org/wiki/Cantor%27s_first_set_theory_article">more than one size of infinity</a>. I was a math major in college and eventually found a home in computer science in graduate school. </p>
<p>But as a professional, I’ve seen that tech is skewed by currents that carry men to success and hold women back. In academic computer science departments, women are usually a small minority. </p>
<p>In most organizations I have dealt with, women rarely occupy the top job. From 2001 to 2009, I led a National Science Foundation supercomputer center. Ten years after moving on from that job, I’m still the only woman to have occupied that position. </p>
<p>Several years into my term, I discovered that I was paid one-third less than others with similar positions. Successfully lobbying for pay equity with my peers took almost a year and a sincere threat to step down from a job I loved. In the work world, money implies value, and no one wants to be paid less than their peers.</p>
<h2>Changing culture takes persistence</h2>
<p>Culture impacts outcomes. During my term as a supercomputer center head, each center needed to procure the biggest, baddest machine in order to get the bragging rights – and resources – necessary to continue. Supercomputer culture in those days was hypercompetitive and focused on dominance of Supercomputing’s <a href="https://www.top500.org/">Top500 ranking</a>.</p>
<figure class="align-center ">
<img alt="A climate-controlled room containing banks of computer processors" src="https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=355&fit=crop&dpr=1 600w, https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=355&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=355&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=446&fit=crop&dpr=1 754w, https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=446&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/354737/original/file-20200825-20-890vmb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=446&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Supercomputer centers typically involve lots of hardware like these banks of computer processors.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/nasa_goddard/6559334541/">NASA Goddard Space Flight Center/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>In this environment, women in leadership were unusual and there was more for women to prove, and quickly, if we wanted to get something done. The field’s focus on dominance was reflected in organizational culture. </p>
<p>My team and I set out to change that. Our efforts to include a broader range of styles and skill sets ultimately changed the composition of our center’s leadership and management. Improving the organizational culture also translated into a richer set of projects and collaborations. It helped us expand our focus to infrastructure and users and embrace the data revolution early on. </p>
<p>[<em><a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=experts">Expertise in your inbox. Sign up for The Conversation’s newsletter and get expert takes on today’s news, every day.</a></em>]</p>
<h2>Setting the stage for cultural diversity</h2>
<p>Diverse leadership is a critical part of creating diverse cultures. Women are <a href="https://hbr.org/2013/09/women-rising-the-unseen-barriers">more likely to thrive</a> in environments where they have not only stature, but responsibility, resources, influence, opportunity and power. </p>
<p>I’ve seen this firsthand as a co-founder of the <a href="https://www.rd-alliance.org/">Research Data Alliance (RDA)</a>, an international community organization of more than 10,000 members that has developed and deployed infrastructure to facilitate data sharing and data-driven research. From the beginning, gender balance has been a major priority for RDA, and as we grew, a reality in all leadership groups in the organization. </p>
<p>RDA’s plenaries also provide a model for diverse organizational meetings in which speaker lineups are expected to include both women and men, and all-male panels, nicknamed “manels,” are strongly discouraged. Women both lead and thrive in this community.</p>
<p>Having women at the table makes a difference. As a board member of the <a href="https://sloan.org/">Alfred P. Sloan Foundation</a>, I’ve seen the organization improve the diversity of annual classes of fellows in the highly prestigious <a href="https://sloan.org/fellowships/">Sloan Research Fellows’</a> program. To date, 50 Nobel Prize winners and many professional award winners are former Sloan Research Fellows.</p>
<p>Since 2013, the accomplished community members Sloan has chosen for its Fellowship Selection Committees have been half or more women. During that time, the diversity of Sloan’s research fellowship applicant pool and awardees has increased, with no loss of quality.</p>
<h2>Calming cultural currents</h2>
<p>Culture change is a marathon, not a sprint, requiring constant vigilance, many small decisions, and often changes in who holds power. My experience as supercomputer center head, and with the Research Data Alliance, the Sloan Foundation and other groups has shown me that organizations can create positive and more diverse environments. Intentional strategies, prioritization and persistent commitment to cultural change can help turn the tide.</p>
<p>Some years ago, one of my best computer science students told me that she was not interested in a tech career because it was so hard for women to get ahead. Cultures that foster diversity can change perceptions of what jobs women can thrive in, and can attract, rather than repel, women to study and work in tech. </p>
<p>Calming the cultural currents that hold so many women back can move the tech field closer to Borg’s goal of equal representation in the future. It’s much better to be late than never.</p>
<p><em>The Sloan Foundation has provided funding to The Conversation US.</em></p>
<p class="fine-print"><em><span>Francine Berman is former Chair of the Board of Trustees of the Anita Borg Institute. She currently serves as a member of the Alfred P. Sloan Foundation Board of Trustees. She is funded by the National Science Foundation.</span></em></p>
Diversifying leadership can change organizational cultures, which removes barriers to women in the tech industry and academia.
Francine Berman, Hamilton Distinguished Professor of Computer Science, Rensselaer Polytechnic Institute
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/141047
2020-06-29T20:06:08Z
2020-06-29T20:06:08Z
How Australia’s supercomputers crunched the numbers to guide our bushfire and pandemic response
<figure><img src="https://images.theconversation.com/files/343094/original/file-20200622-75483-1hrmnkj.jpg?ixlib=rb-1.1.0&rect=12%2C2%2C1617%2C1219&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">NCI Australia</span>, <span class="license">Author provided</span></span></figcaption></figure><p>As 2020 began, Australia was stunned by the worst bushfires on record. Six months later we are weathering the coronavirus pandemic sweeping the globe.</p>
<p>This year, perhaps more than ever before, decision-makers, emergency services, health providers and threatened communities have needed fast, reliable information to understand what’s happening. And beyond that, they have needed high-powered modelling to get a sense of what is yet to come.</p>
<p>That’s where supercomputers come in. Australia’s high-performance research computing infrastructure is led by two centres: the <a href="https://nci.org.au/">National Computational Infrastructure</a> (NCI Australia) in Canberra and the <a href="https://pawsey.org.au/">Pawsey Supercomputing Centre</a> in Perth. </p>
<p>NCI Australia is home to Gadi, the most powerful supercomputer in the southern hemisphere, which can do in an hour what would take your average desktop PC around 35 years running flat out. The Pawsey centre hosts the Nimbus cloud, which is specially designed for data-intensive research work in cutting-edge fields such as space science.</p>
<p>Both centres operate around the clock every day of the year. Even without a crisis, they process unimaginable quantities of data to deliver analysis and forecasts for decision-makers across the nation. To take one example, NCI’s routine work for <a href="http://www.ga.gov.au/about/projects/geographic/digital-earth-australia">Digital Earth Australia</a> helps to identify soil and coastal erosion, crop growth, water quality and changes to cities and regions. </p>
<h2>Supercomputing behind the scenes</h2>
<p>By their nature, high-performance research computers operate mostly behind the scenes. They provide infrastructure that is less visible but no less important than a ship or a telescope, and the expertise to help researchers use it.</p>
<p>When Australian government agencies need to make decisions to respond to a crisis like the bushfires or COVID-19, they draw on <a href="https://www.cawcr.gov.au/research/access/">decades of Australian and international research</a> backed by high-performance computing and data infrastructure. </p>
<p>Last summer, satellite images shocked the world with detailed and strangely beautiful views of <a href="https://www.sciencealert.com/stunning-images-from-space-reveal-the-extent-of-australia-s-bushfire-crisis">swirls of bushfire smoke</a> the size of global weather patterns. Our Kiwi colleagues woke to apocalyptic skies, tipped off beforehand by Australian and NZ collaborations with Japan’s Himawari-8 and -9 weather satellite mission. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344487/original/file-20200629-155308-wiyjdv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Sentinel Earth-observing satellites provided a rich stream of data about the Australian bushfires in close to real time.</span>
<span class="attribution"><span class="source">ESA / Copernicus</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span>
</figcaption>
</figure>
<p>Research satellites like the European <a href="https://copernicus.nci.org.au/sara.client/#/home">Sentinel-3</a> with a wider global view continued to track the plume as it circled the planet. NCI Australia hosts a <a href="http://www.copernicus.gov.au">regional data hub</a> to support Europe’s Copernicus Earth observation program.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australias-bushfire-smoke-is-lapping-the-globe-and-the-law-is-too-lame-to-catch-it-130010">Australia's bushfire smoke is lapping the globe, and the law is too lame to catch it</a>
</strong>
</em>
</p>
<hr>
<p>Data-driven models running on supercomputers can provide earlier and more accurate warning of firestorms, floods, hailstorms, cyclones and other extremes. Better warnings give emergency services crucial hours that save lives and property. </p>
<p>Both national facilities are <a href="https://nci.org.au/news-events/news/leaders-australian-computing-research-begin-battle-covid-19">contributing resources</a> to support researchers in Australia in the fight against COVID-19. </p>
<p>With the Gadi supercomputer, NCI is providing the equivalent of more than 4,500 years of computer time to support three research groups. Pawsey is providing access to more than 1,100 desktops’ worth of computing power on the newly deployed <a href="https://pawsey.org.au/covid19-accelerated-access/">Nimbus cloud</a> for researchers across five projects.</p>
<h2>National infrastructure working at scale</h2>
<p>The Australian Government’s National Collaborative Research Infrastructure Strategy (<a href="https://www.education.gov.au/national-collaborative-research-infrastructure-strategy-ncris">NCRIS</a>) has invested A$70 million in each centre for upgrades to ensure the facilities can keep up with Australian research across all scientific domains. NCI’s Gadi supercomputer is about nine times more powerful than its predecessor, while the first phase of Pawsey’s upgrade has already delivered ten times more cloud storage and boosted network capabilities fivefold.</p>
<p>Reliable, collaborative facilities like NCI and Pawsey are essential to develop and improve immensely complex global and local models and prediction systems used by national and state governments.</p>
<p>The NCI and Pawsey systems support much more than climate and weather data and pandemic modelling. Other projects support gene sequencing, population mapping, transmission and containment modelling, and global economic predictions.</p>
<h2>Scale, collaboration and speed</h2>
<p>Scaling up our research computing capacity is important to meet the challenges of ever-growing amounts of research data. Collaboration makes it possible to access the best expertise. Speed is essential to meet the urgent demands of decision makers.</p>
<p>Supercomputers connected to massive data systems and supported by expert staff can yield crucial insights at scale, quickly enough to help our agencies identify and respond to crises. Faster processing also means researchers can identify and model trends that would otherwise go unnoticed, but which require early intervention. </p>
<p>Supercharging the relevant science can deliver real economic, environmental and public health outcomes. The need for informed crisis response does not look like it will go away any time soon.</p>
<p class="fine-print"><em><span>Sean Smith is Director at NCI Australia, the federally funded national supercomputing facility hosted at The Australian National University.
He is a serial recipient of Australian Research Council research funding in his field of computational nanomaterials science and technology.
He is elected Fellow of the American Association for the Advancement of Science (Fellow, AAAS).</span></em></p><p class="fine-print"><em><span>Mark Stickells is the Executive Director of the Pawsey Supercomputing Centre, an unincorporated joint venture between CSIRO, Curtin University, Edith Cowan University, Murdoch University and The University of Western Australia. Pawsey receives operational and capital funding from WA and Australian governments.</span></em></p>
Supercomputers in Canberra and Perth power the analysis and modelling that decision-makers rely on in national crises.
Sean Smith, Professor and Director, NCI Australia, Australian National University
Mark Stickells, Executive Director, Pawsey Supercomputing Centre, CSIRO
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/139539
2020-06-03T19:08:30Z
2020-06-03T19:08:30Z
Scientists tap the world’s most powerful computers in the race to understand and stop the coronavirus
<figure><img src="https://images.theconversation.com/files/338637/original/file-20200529-78885-sw5rbw.png?ixlib=rb-1.1.0&rect=0%2C0%2C1373%2C883&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It takes a tremendous amount of computing power to simulate all the components and behaviors of viruses and cells.</span> <span class="attribution"><span class="source">Copyright: Thomas Splettstoesser scistyle.com</span></span></figcaption></figure><p>In “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, the haughty supercomputer Deep Thought is asked whether he can find the answer to the ultimate question concerning life, the universe and everything. He replies that, yes, he can do it, but it’s tricky and he’ll have to think about it. When asked how long it will take him he replies, “Seven-and-a-half million years. I told you I’d have to think about it.” </p>
<p>Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the COVID-19 pandemic. They’re being used in <a href="https://covid19-hpc-consortium.org/projects">many facets of responding to the disease</a>, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.</p>
<p>The largest number of COVID-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively <a href="https://www.nejm.org/doi/10.1056/NEJMcibr2007042">muzzle proteins</a> that SARS-CoV-2, the virus that causes COVID-19, needs for its life cycle.</p>
<p>The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates <a href="https://theconversation.com/what-is-the-ace2-receptor-how-is-it-connected-to-coronavirus-and-why-might-it-be-key-to-treating-covid-19-the-experts-explain-136928">its human cellular target</a>, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/338967/original/file-20200601-95054-1mvqwwo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second – equivalent to about a million laptops.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/olcf/27790972307/in/album-72157697679727475/">Oak Ridge National Laboratory, U.S. Dept. of Energy</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>I am <a href="https://scholar.google.com/citations?user=Htjp4VEAAAAJ&hl=en">a molecular biophysicist</a>. My lab, at the <a href="https://cmb.ornl.gov/">Center for Molecular Biophysics</a> at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.</p>
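<p>The ranking step described above can be illustrated with a minimal, hypothetical Python sketch. The compound names and scores here are invented; a real pipeline would compute binding scores with a docking engine rather than take them as given.</p>

```python
# Hypothetical sketch of ranking docked compounds by binding score.
# Scores stand in for estimated binding free energies (kcal/mol);
# lower (more negative) means a tighter lock-and-key fit.
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    score: float  # mock binding free energy; lower = tighter binding

def rank_hits(compounds, top_n=10):
    """Return the top_n compounds with the most favorable (lowest) scores."""
    return sorted(compounds, key=lambda c: c.score)[:top_n]

library = [
    Compound("cmpd-A", -9.2),
    Compound("cmpd-B", -6.1),
    Compound("cmpd-C", -10.5),
]
hits = rank_hits(library, top_n=2)
print([c.name for c in hits])  # tightest binders first
```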
<p>The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of COVID-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the <a href="https://dx.doi.org/10.1208%2Fs12248-014-9604-9">anti-HIV drugs that revolutionized AIDS treatment</a> in the 1990s.</p>
<h2>World’s most powerful computer</h2>
<p>Since the 1990s the power of supercomputers has increased by a factor of a million or so. <a href="https://www.olcf.ornl.gov/summit/">Summit</a> at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly <a href="https://www.popsci.com/summit-supercomputer/">a million laptops</a>. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.</p>
<p>However, in order to gin up speed, <a href="https://www.explainthatstuff.com/how-supercomputers-work.html">supercomputer architectures</a> have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually. </p>
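<p>The "many calculations at the same time" idea can be sketched in a few lines of Python. This is a toy stand-in, not real docking code: the scoring function is invented, and a supercomputer distributes work across thousands of nodes rather than the cores of one machine.</p>

```python
# Toy sketch of massively parallel screening: score many compounds
# concurrently across worker processes, then keep the best-scoring one.
from concurrent.futures import ProcessPoolExecutor

def score(compound_id: int) -> tuple[int, float]:
    # Placeholder "docking" score: a deterministic mock value per compound.
    return compound_id, -(compound_id % 7) - 1.0

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # one worker per CPU core
        results = list(pool.map(score, range(1000)))
    best = min(results, key=lambda r: r[1])      # lowest score = best fit
    print(f"best compound: {best[0]} (score {best[1]})")
```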
<p>Further, researchers use supercomputers to <a href="https://doi.org/10.1016/j.bpj.2018.02.038">figure out by simulation the different shapes</a> formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits – chemicals that work – for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/338640/original/file-20200529-78885-8lfqmw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Scientists are using supercomputers to find ways to disable the various proteins – including the infamous spike protein (green protrusions) – produced by SARS-CoV-2, the virus responsible for COVID-19.</span>
<span class="attribution"><span class="source">Copyright: Thomas Splettstoesser scistyle.com</span></span>
</figcaption>
</figure>
<h2>Billions of possibilities</h2>
<p>So which chemicals are being tested for COVID-19? A first approach is trying out <a href="https://theconversation.com/we-found-and-tested-47-old-drugs-that-might-treat-the-coronavirus-results-show-promising-leads-and-a-whole-new-way-to-fight-covid-19-136789">drugs that already exist for other indications</a> and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.</p>
<p>But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against COVID-19, even when the SARS-CoV-2 protein targets don’t have the same shape.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=703&fit=crop&dpr=1 600w, https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=703&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=703&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=884&fit=crop&dpr=1 754w, https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=884&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/339594/original/file-20200603-130940-nl9ubn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=884&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">ACE2 acts as the docking receptor for the SARS-CoV-2 virus’s spike protein and allows the virus to infect the cell.</span>
<span class="attribution"><span class="source">The Conversation</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The best approach is to check if repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven <a href="https://doi.org/10.26434/chemrxiv.11871402">docking study of a repurposing compound database</a> in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a <a href="https://covid19-hpc-consortium.org/">high-performance computing consortium</a> against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.</p>
<p>Our own work has now expanded to about <a href="https://coronavirus-hpc.ornl.gov">10 targets on SARS-CoV-2</a>, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.</p>
<p>Also, we and others are venturing out into the wild world of new drug discovery for COVID-19 – looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.</p>
<h2>Dawn of the exascale era</h2>
<p>Work will be helped by the arrival of the next big machine at Oak Ridge, called <a href="https://www.olcf.ornl.gov/frontier/">Frontier</a>, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.</p>
<p>Although some fear supercomputers <a href="https://en.wikipedia.org/wiki/AI_takeover">will take over the world</a>, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best – some prefer artificial intelligence, for example – so there’s quite a lot of arguing going on. </p>
<p>Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle COVID-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply <a href="https://hitchhikers.fandom.com/wiki/42">42</a>.</p>
<p>[<em>Get our best science, health and technology stories.</em> <a href="https://theconversation.com/us/newsletters/science-editors-picks-71/?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=science-best">Sign up for The Conversation’s science newsletter</a>.]</p>
<p class="fine-print"><em><span>Jeremy Smith receives funding from Berg Health, the NIH and the DOE.</span></em></p>
Scanning through billions of chemicals to find a few potential drugs for treating COVID-19 requires computers that harness together thousands of processors.
Jeremy Smith, Governor's Chair, Biophysics, University of Tennessee
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/127309
2020-01-20T17:44:22Z
2020-01-20T17:44:22Z
Google claims to have invented a quantum computer, but IBM begs to differ
<figure><img src="https://images.theconversation.com/files/304839/original/file-20191203-67002-chsvk1.jpg?ixlib=rb-1.1.0&rect=7%2C23%2C5114%2C3002&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Quantum computing would signify an immense shift in processing power, but how close are we to achieving it?</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>On Oct. 23, 2019, Google published a paper in the journal <em>Nature</em> entitled “<a href="https://doi.org/10.5061/dryad.k6t1rj8">Quantum supremacy using a programmable superconducting processor</a>.” The tech giant announced its achievement of a much vaunted goal: quantum supremacy. </p>
<p>This perhaps ill-chosen term (<a href="https://www.quantamagazine.org/john-preskill-explains-quantum-supremacy-20191002/">coined by physicist John Preskill</a>) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.</p>
<p>Google’s benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration). </p>
<p>Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit – a qbit – can store information in a coherent superposition state, which can be considered to contain fractional amounts of both 0 and 1. </p>
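<p>The superposition idea can be made concrete with a toy calculation: a qbit's state is a pair of complex amplitudes, and the squared magnitudes of those amplitudes give the probabilities of reading out 0 or 1. This is a plain-Python illustration, not a quantum simulation.</p>

```python
# A qbit state as two complex amplitudes (alpha for |0>, beta for |1>).
# Squared magnitudes give measurement probabilities and must sum to 1.
import math

alpha = complex(1 / math.sqrt(2), 0)  # amplitude of reading out 0
beta = complex(1 / math.sqrt(2), 0)   # amplitude of reading out 1

p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2
assert math.isclose(p0 + p1, 1.0)  # normalization
print(p0, p1)  # an equal superposition: 50/50 chance of 0 or 1
```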
<p>Sycamore uses technology developed by the <a href="https://web.physics.ucsb.edu/%7Emartinisgroup/">superconductivity research group of physicist John Martinis at the University of California, Santa Barbara</a>. The entire Sycamore system must be kept at cryogenic temperatures using special helium dilution refrigeration technology. Because of the immense challenge involved in keeping such a large system near absolute zero, it is a technological tour de force. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Researchers at Google have been working on a quantum computer, which would revolutionize the industry.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Contentious findings</h2>
<p>The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than a classical computer chip — like the kind in our laptops — could achieve. Just how vastly became a point of contention, and the story was not without intrigue. </p>
<p>An inadvertent leak of the Google group’s paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when <em>Nature</em> prohibits discussion by the authors regarding as-yet-unpublished papers. The lapse was momentary, but long enough that <a href="https://www.ft.com/content/b9bb4e54-dbc1-11e9-8f9b-77216ebe1f17"><em>The Financial Times</em></a>, <a href="https://www.theverge.com/2019/9/23/20879485/google-quantum-supremacy-qubits-nasa"><em>The Verge</em></a> and other outlets picked up the story. </p>
<p>A well-known quantum computing blog by computer scientist Scott Aaronson contained some <a href="https://www.scottaaronson.com/blog/?p=4317">oblique references to the leak</a>. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.</p>
<h2>Challenges to Google’s story</h2>
<p>The story had a further controversial twist when the Google group’s claims were immediately countered by IBM’s quantum computing group. IBM shared <a href="https://arxiv.org/abs/1910.09534">a preprint posted on the ArXiv</a> (an online repository for academic papers that have yet to go through peer review) and <a href="https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/">a blog post dated Oct. 21, 2019</a> (note the date!). </p>
<p>While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qbit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days. </p>
<p>However, the IBM classical computation would have to be carried out on the world’s fastest supercomputer — the IBM-developed Summit OLCF-4 at Oak Ridge National Labs in Tennessee — with clever use of secondary storage to achieve this benchmark.</p>
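<p>A back-of-envelope calculation shows why secondary storage enters the picture: simulating 53 qbits exactly means tracking 2<sup>53</sup> complex amplitudes, which at double precision is on the order of a hundred petabytes.</p>

```python
# Memory needed to hold the full state vector of a 53-qbit system.
n_qbits = 53
amplitudes = 2 ** n_qbits        # one complex amplitude per basis state
bytes_needed = amplitudes * 16   # double-precision complex = 16 bytes
petabytes = bytes_needed / 1e15
print(f"~{petabytes:.0f} PB for the full state vector")
```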
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=480&fit=crop&dpr=1 600w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=480&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=480&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=603&fit=crop&dpr=1 754w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=603&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=603&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Summit OLCF-4 supercomputer was developed by IBM for use at Oak Ridge National Laboratory.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/olcf/42957291821/in/photolist-NsW4ML-25mPCpZ-JkN2vk-28rZmfr-YYYjk1-282ZTzq-271XTpf-271XZao-26JSfsB-25mPBPa-287nqxR-FENxmy-22HVvNY-227b4AU-XgBEPE-W6iPRi-XZZrnP-28rxs9o-XqcFKR-28rZmpK-H4EmiH-27ZDEwH-26JSngB-279g4ti-25moRES-28vVuuM">Carlos Jones/ORNL</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum. </p>
<p>For the average citizen, the mere fact that a 53-qbit device could beat the world’s fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.</p>
<h2>Quantum futures</h2>
<p>The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exist, including <a href="https://ionq.com">ion traps</a>, <a href="https://quantuminstitute.yale.edu/publications/what-makes-great-qubit-diamonds-and-ions-could-hold-answer">superconducting device arrays</a> similar to those in Google’s Sycamore system and <a href="https://www.aps.org/publications/apsnews/200805/diamond.cfm">isolated electrons trapped in NV-centres in diamond</a>. </p>
<p>These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.</p>
<p>What has lagged quite a bit behind are custom-designed algorithms (computer programs) designed to run on quantum computers and able to take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist — <a href="https://www.scottaaronson.com/blog/?p=208">Shor’s algorithm for factorization</a>, for example, which has applications in cryptography, and <a href="https://www.cs.cmu.edu/%7Eodonnell/quantum15/lecture04.pdf">Grover’s algorithm</a>, which might prove useful in database search applications — the total set of quantum algorithms remains rather small. </p>
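<p>Grover's advantage is a quadratic reduction in queries: an unstructured search over N items takes about N classical checks in the worst case, but only about (π/4)√N Grover iterations. The sketch below just computes those counts; it does not simulate the quantum algorithm itself.</p>

```python
# Query counts for unstructured search over n items:
# classical worst case ~n, Grover ~ (pi/4) * sqrt(n) iterations.
import math

def classical_queries(n: int) -> int:
    return n  # worst case: may have to check every item

def grover_iterations(n: int) -> int:
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 1_000_000
print(classical_queries(n), "classical checks vs",
      grover_iterations(n), "Grover iterations")
```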
<p>Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions ranging from confidential communications to financial transactions require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption. </p>
<p>Quantum computing could be very disruptive in this space, as Shor’s algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers. </p>
<p>The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.</p>
<h2>Quantum applications</h2>
<p>More appealing for the non-espionage and non-hacker communities — in other words, the rest of us — are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers. </p>
<p>Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, <a href="https://www.dwavesys.com/media-coverage/ieee-spectrum-vw-solves-quantum-chemistry-problems-d-wave-machine">such as quantum chemistry problems that could be relevant for drug design</a> and various challenges in condensed matter physics including a number related to high-temperature superconductivity. </p>
<p>So where are we in the wonderful and wild world of quantum computation? </p>
<p>In recent years, we have had many convincing demonstrations that qbits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices. </p>
<p>[ <em>Deep knowledge, daily.</em> <a href="https://theconversation.com/ca/newsletters?utm_source=TCCA&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=deepknowledge">Sign up for The Conversation’s newsletter</a>. ]</p>
<p class="fine-print"><em><span>Michael Bradley receives funding from NSERC, for research on plasma processing techniques for new quantum materials, with applications in quantum information.</span></em></p>
A paper published by researchers at Google claimed that they had achieved computing quantum supremacy, but leaks and counter-claims have created a stir.
Michael Bradley, Professor of Physics & Engineering Physics, University of Saskatchewan
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/127255
2019-11-19T18:22:53Z
2019-11-19T18:22:53Z
Quantum computing, the new frontier of finance
<figure><img src="https://images.theconversation.com/files/302156/original/file-20191118-66953-pqj5x8.jpg?ixlib=rb-1.1.0&rect=0%2C111%2C1526%2C1055&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Close-up on the circuitry of the Vesuvius quantum computer, announced in 2012 by the Canadian firm D-Wave Systems.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/jurvetson/39188582795">Steve Jurvetson/Flickr</a></span></figcaption></figure><p>The evolution of modern finance has been closely linked to the evolution of computers, communications and financial mathematics. Two main shifts occurred: in the 1970s, with the beginning of derivatives trading, and after the 2007 crisis, with the massive introduction of fintech.</p>
<p>Derivatives pricing started with the celebrated <a href="https://www.investopedia.com/terms/b/blackscholes.asp">Black and Scholes equation</a> and formulas in 1973, followed by a wealth of mathematical methods to compute the prices of derivatives. Still, even in the 1980s derivatives pricing required supercomputers, giving big firms a major competitive advantage – before the 2007 crisis, the trading volume was close to 1 trillion dollars a day. The prevailing opinion was that derivatives had enabled us to complete financial markets so that any stream of cash flows could be engineered.</p>
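<p>For the curious, the Black-Scholes price of a European call can be computed in a few lines. This is the standard textbook formula written with Python's standard library; the input numbers are illustrative, not market data.</p>

```python
# Black-Scholes price of a European call option.
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Spot S, strike K, maturity T (years), risk-free rate r, volatility sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Example: at-the-money call, one year to maturity, 5% rate, 20% volatility.
print(round(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
```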
<p>This belief was shattered by the <a href="https://www.nytimes.com/2018/09/12/upshot/financial-crisis-recession-recovery.html">2007 financial crisis</a>, which showed that hedging can be perfect only as long as counterparties stay solvent. With the <a href="https://www.nytimes.com/2018/09/17/opinion/lehman-brothers-financial-crisis.html">failure of Lehman Brothers</a>, the world of finance came to understand, painfully, that there is risk in derivatives and that free markets are not self-regulating. To save them, central banks injected trillions of dollars, euros and yen in liquidity through <a href="https://www.investopedia.com/terms/q/quantitative-easing.asp">quantitative easing</a> (QE). In the United States, the Fed injected some 4.5 trillion dollars in liquidity, roughly <a href="https://fred.stlouisfed.org/">one-third of the total monetary mass</a>.</p>
<h2>Understanding clients and mitigating problems</h2>
<p>After the crisis, the financial world turned its attention to understanding clients and to mitigating the problems created by market manipulations made possible by automated trading. Fintech uses computer-based techniques to model client behaviour, to automate dealing with clients and to plan and execute trades. At the same time, a number of <a href="https://www.investopedia.com/terms/f/flash-crash.asp">“flash crashes”</a> – sudden but short-lived large drops in market value – have heightened the attention of major players to the risk of crowding of algorithms.</p>
<p>A major new change is now in sight through the possible implementation of quantum computers. Instead of binary bits – the classic elementary unit of information – quantum computing uses <a href="https://en.wikipedia.org/wiki/Qubit">qubits</a> (quantum bits), obtained by the superposition of binary states. This would allow quantum computers to process a much larger amount of information, thousands of times faster than classical computers. </p>
<p>It was generally believed that quantum computing was far in the future, but Google recently announced that it has actually reached this goal. First, the <a href="https://www.ft.com/content/b9bb4e54-dbc1-11e9-8f9b-77216ebe1f17"><em>Financial Times</em> reported</a> that Google had posted a paper on the NASA website announcing that its quantum computer, called Sycamore, had been able to perform in three minutes a computation that would take 10,000 years on classical supercomputers. The paper was later removed from the website, but Google confirmed the announcement with an <a href="https://www.nature.com/articles/d41586-019-03224-w">October 23 paper in <em>Nature</em></a> and invited scientists and journalists to <a href="https://www.nytimes.com/2019/10/30/opinion/google-quantum-computer-sycamore.html">watch the computation</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ZCzSnFOV6Pw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Google claims “quantum computer supremacy” with new processor (ABC News).</span></figcaption>
</figure>
<h2>Quantum leaps</h2>
<p>Why is it so important to reach <a href="https://en.wikipedia.org/wiki/Quantum_supremacy">quantum supremacy</a>? Modern economies are shaped by complex computations. Supercomputers are used to design products such as cars and planes, invent new drugs, create electronic circuits, model economies, organise large-scale logistics and study the climate. Unfortunately, computations also allow us to build lethal weapons and, increasingly, to monitor and attempt to control the behaviour of populations.</p>
<p>In the last 70 years, computing power has increased by a mind-boggling multiple. In the 1960s, even powerful computers were able to perform only a few MFLOPS (millions of floating point operations per second), while today the most powerful computer is able to perform almost <a href="https://www.theverge.com/circuitbreaker/2018/6/12/17453918/ibm-summit-worlds-fastest-supercomputer-america-department-of-energy">100 petaFLOPS</a> (10<sup>17</sup> floating point operations per second).</p>
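<p>The arithmetic behind that comparison is simple but worth making explicit: going from a few MFLOPS to roughly 100 petaFLOPS is a factor of about 10<sup>11</sup>.</p>

```python
# From a few MFLOPS (1e6 operations/second) in the 1960s
# to ~100 petaFLOPS (1e17 operations/second) today.
mflops_1960s = 1e6
petaflops_today = 100e15
factor = petaflops_today / mflops_1960s
print(f"speed-up factor: {factor:.0e}")  # about 1e+11
```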
<p>Even with such power, there are important computational tasks that are not solvable, or only partially solvable, today. The study of combustion and turbulence, the study of molecules from basic physical principles (quantum-mechanical simulation), engineering nuclear fusion and even logistics problems are some of the grand challenges of computation as defined by the federal <a href="http://www.hpcc.gov/">High Performance Computing and Communications</a> (HPCC) program. Solving these problems would give a firm or even a nation an important competitive advantage. There is, of course, also the sinister possibility of creating more destructive weapons.</p>
<p>What would be the importance of quantum supremacy for finance and economics? First, quantum computing would make current cryptographic techniques unsafe. Methods and algorithms will have to be changed. <a href="https://www.technologyreview.com/s/613946/explainer-what-is-post-quantum-cryptography/">Post-quantum cryptography</a>, or quantum-resistant cryptography, is a flourishing sector of study both in academia and with firms involved in cryptography. Some firms already offer products for post-quantum cryptography, which will be big business.</p>
<h2>Intuition, not brute force</h2>
<p>But probably the major changes would be in artificial intelligence (AI) and machine learning. The fact is that we do not know how human intuition and problem-solving work. Ultimately, computers solve problems with a brute-force approach, looking at different alternatives and choosing the best. The search space of quantum computers could be thousands of times larger than the search space considered by current computers. It would become feasible to synthesise a design from specifications, and machines could become more “creative” through the ability to explore an immense range of possible design solutions. In the fields of finance and economics, quantum computing could lead to analysing a large space of heterogeneous data to make financial predictions and understand economic phenomena. </p>
<p>Amid such hope, caution is necessary: financial and economic data are truly complex, and their complexity and non-stationarity might defy analysis, so more computing power will not necessarily lead to more accurate predictions. In other words, it is questionable whether the use of quantum computing will reduce uncertainty.</p>
<p>The global effect of quantum computing on economic and social life will depend on the use that will be made of this tool – and that stems from human decisions rather than being forced by knowledge itself.</p><img src="https://counter.theconversation.com/content/127255/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations beyond their research organisation.</span></em></p>
On October 23 Google announced that it had built a quantum computer thousands of times faster than classical computers. This could have immense impacts on finance, cryptography and other fields.
Sergio Focardi, Enseignant-chercheur en Finance quantitative à l’ESILV et à l'EMLV, membre du De Vinci Research Center, Pôle Léonard de Vinci
Davide Mazza, Professor of Finance, Pôle Léonard de Vinci
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/120706
2019-07-24T16:37:47Z
2019-07-24T16:37:47Z
Neven’s Law: why it might be too soon for a Moore’s Law for quantum computers
<figure><img src="https://images.theconversation.com/files/285614/original/file-20190724-110149-1ipk2c9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A dilution refrigerator used to test quantum processor prototypes.</span> <span class="attribution"><span class="source">Agnese Abrusci</span>, <span class="license">Author provided</span></span></figcaption></figure><p>A new disruptive technology is on the horizon and it promises to take computing power to unprecedented and unimaginable heights. And to predict the speed of progress of this new “<a href="https://theconversation.com/explainer-quantum-computation-and-communication-technology-7892">quantum computing</a>” technology, the director of Google’s Quantum AI Labs, <a href="https://ai.google/research/people/HartmutNeven">Hartmut Neven</a>, has <a href="https://www.quantamagazine.org/does-nevens-law-describe-quantum-computings-rise-20190618/">proposed a new rule</a> similar to the Moore’s Law that has measured the progress of computers for more than 50 years.</p>
<p>But can we trust “Neven’s Law” as a true representation of what is happening in quantum computing and, most importantly, what is to come in the future? Or is it simply too early on in the race to come up with this type of judgement?</p>
<p>Unlike conventional computers that store data as electrical signals that can have one of two states (1 or 0), <a href="https://theconversation.com/how-we-created-the-first-ever-blueprint-for-a-real-quantum-computer-72290">quantum computers</a> can use many physical systems to store data, such as electrons and photons. These can be engineered to encode information in multiple states, which enables them to do calculations exponentially faster than traditional computers.</p>
<p>Quantum computing is still in its infancy, and no one has yet built a quantum computer that can outperform conventional supercomputers. But, despite some <a href="https://theconversation.com/hype-and-cash-are-muddying-public-understanding-of-quantum-computing-82647">scepticism</a>, there is widespread excitement about how fast progress is <a href="https://theconversation.com/ibm-launches-commercial-quantum-computing-were-not-ready-for-what-comes-next-110331">now being made</a>. As such, it would be helpful to have an idea of what we can expect from quantum computers in years to come. </p>
<p><a href="https://theconversation.com/moores-law-is-50-years-old-but-will-it-continue-44511">Moore’s Law</a> describes the way that the processing power of traditional digital computers has tended to double roughly every two years, creating what we call exponential growth. Named after Intel co-founder, Gordon Moore, the law more accurately describes the rate of increase in the number of transistors that can be integrated into a silicon microchip.</p>
<p>But quantum computers are designed in a very different way around the laws of <a href="https://theconversation.com/explainer-quantum-physics-570">quantum physics</a>. And so Moore’s Law does not apply. This is where Neven’s Law comes in. It states that quantum computing power is experiencing “doubly exponential growth relative to conventional computing”.</p>
<p>Exponential growth means something grows by powers of two: 2¹ (2), 2² (4), 2³ (8), 2⁴ (16) and so on. Doubly exponential growth means something grows by powers of powers of two: 2² (4), 2⁴ (16), 2⁸ (256), 2¹⁶ (65,536) and so on. To put this into perspective, if traditional computers had seen doubly exponential growth under Moore’s Law (instead of singly exponential), we would have had today’s laptops and smartphones by 1975.</p>
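The arithmetic above can be checked with a short script; this is purely an illustration of the two growth patterns described in the article, not a model of real hardware:

```python
# Compare singly and doubly exponential growth over the first n steps.
def exponential(n):
    """2^k: Moore's Law-style growth."""
    return [2 ** k for k in range(1, n + 1)]

def doubly_exponential(n):
    """2^(2^k): the growth Neven's Law describes."""
    return [2 ** (2 ** k) for k in range(1, n + 1)]

print(exponential(4))         # [2, 4, 8, 16]
print(doubly_exponential(4))  # [4, 16, 256, 65536]
```

Note how quickly the second sequence outruns the first: by the fourth step it is already over four thousand times larger.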
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/285615/original/file-20190724-110162-ze17t7.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Quantum computers may be developing much faster than conventional ones.</span>
<span class="attribution"><span class="source">Agnese Abrusci</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>This enormously fast pace should soon lead, Neven hopes, to the so-called quantum advantage. This is a much-anticipated milestone where a relatively small quantum processor overtakes the most powerful conventional supercomputers.</p>
<p>The reason for this doubly exponential growth is based on an in-house observation. <a href="https://www.quantamagazine.org/does-nevens-law-describe-quantum-computings-rise-20190618">According to an interview with Neven,</a> Google scientists are getting better at decreasing the error rate of their quantum computer prototypes. This allows them to build more complex and more powerful systems with every iteration.</p>
<p>Neven maintains that this progress itself is exponential, much like Moore’s Law. But a quantum processor is inherently and exponentially better than a classical one of equal size. This is because it exploits a quantum effect called <a href="https://theconversation.com/einstein-vs-quantum-mechanics-and-why-hed-be-a-convert-today-27641">entanglement</a> that allows different computational tasks to be done at the same time, producing exponential speed ups.</p>
<p>So, simplistically, if quantum processors are developing at an exponential rate and they are exponentially faster than classical processors, quantum systems are developing at a doubly exponential rate in relation to their classical counterparts.</p>
<h2>A note of caution</h2>
<p>While this sounds exciting, we need to exercise some caution. For starters, Neven’s conclusion seems to be based on a handful of prototypes and progress measured over a relatively short timeframe (a year or less). So few data points could easily be made to fit many other patterns of extrapolated growth.</p>
<p>There is also a practical issue that, as quantum processors become increasingly complex and powerful, technical problems that are minor now could become much more important. For example, the presence of even modest electrical noise in a quantum system could lead to computational errors that become more and more frequent as the processor complexity grows.</p>
<p>This issue could be solved by implementing <a href="https://iopscience.iop.org/article/10.1088/0034-4885/76/7/076001/meta">error correction protocols</a>, but this would effectively mean adding lots of backup hardware to the processor that is otherwise redundant. So the computer would have to become much more complex without gaining much extra power, if any. This kind of problem could affect Neven’s prediction, but at the moment it’s just too soon to call.</p>
<p>Despite being just an empirical observation and not a fundamental law of nature, Moore’s Law foresaw the progress of conventional computing with remarkable accuracy for about 50 years. In some sense, it was more than just a prediction, as it <a href="https://www.cnet.com/news/moores-law-is-the-reason-why-your-iphone-is-so-thin-and-cheap/">stimulated the microchip industry</a> to adopt a consistent roadmap, develop regular milestones, assess investment volumes and evaluate prospective revenues. </p>
<p>If Neven’s observation proves to be as prophetic and self-fulfilling as Moore’s Law, it will certainly have ramifications well beyond the mere prediction of quantum computing performance. For one thing, at this stage, nobody knows whether quantum computers will become widely commercialised or remain the toys of specialised users. But if Neven’s Law holds true, it won’t be long until we find out.</p><img src="https://counter.theconversation.com/content/120706/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alessandro Rossi receives funding from the UKRI Industrial Strategy Challenge Fund through the Measurement Fellowship Scheme at the National Physical Laboratory. He also holds a Chancellor's Fellowship at the University of Strathclyde.</span></em></p><p class="fine-print"><em><span>Fernando Gonzalez-Zalba receives funding from the European Union’s Horizon 2020 Research and Innovation Programme
under grant agreement No 688539, the Royal Society Short Industry Fellowship Programme and the Winton Programme for the Physics of Sustainability. He is a Research Fellow at Hughes Hall, Cambridge, and a Senior Research Scientist at Hitachi Cambridge Laboratory.</span></em></p>
The head of Google’s Quantum AI Labs, Hartmut Neven, claims the current speed of development means a quantum computing breakthrough is near.
Alessandro Rossi, Chancellor's Fellow, Department of Physics, University of Strathclyde
M. Fernando Gonzalez-Zalba, Research Fellow, University of Cambridge
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/110331
2019-01-25T13:08:59Z
2019-01-25T13:08:59Z
IBM launches commercial quantum computing – we’re not ready for what comes next
<figure><img src="https://images.theconversation.com/files/255551/original/file-20190125-108364-1agoxld.jpg?ixlib=rb-1.1.0&rect=0%2C400%2C2700%2C1992&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IBM's quantum computer, Q System One.</span> <span class="attribution"><span class="source">IBM</span></span></figcaption></figure><p>IBM <a href="https://newsroom.ibm.com/2019-01-08-IBM-Unveils-Worlds-First-Integrated-Quantum-Computing-System-for-Commercial-Use">recently unveiled</a> what it claimed was the world’s first commercial quantum computer. While the announcement of the Q System One wasn’t scientifically groundbreaking, the fact that IBM sees this as a commercial product that organisations (if not individuals) will want to use is an important breakthrough.</p>
<p>IBM has taken a prototype technology that has existed in the lab for over 20 years and launched it in the real world. In doing so, it marks an important step towards the next generation of computing technology becoming ubiquitous, something the world isn’t yet ready for. In fact, <a href="https://www.youtube.com/watch?v=jkcTHhqk_lE">quantum may well</a> prove to be the most disruptive technology of the information age.</p>
<p><a href="https://theconversation.com/get-used-to-it-quantum-computing-will-bring-immense-processing-possibilities-46420">Quantum computers</a> work by exploiting the weird phenomena described by quantum physics, such as the ability of an object to be, in a very real sense, in more than one place at the same time. Doing so enables them to solve problems in seconds that would take the age of the universe to solve on even the most powerful of today’s supercomputers. </p>
<h2>Too expensive?</h2>
<p>The <a href="https://www.ncsc.gov.uk/whitepaper/quantum-key-distribution">one criticism</a> typically laid against quantum technologies is that they are “too expensive”, and will continue to be so even as they become more readily available. This is certainly the case today. IBM isn’t making its quantum computer available to buy but rather to access over the internet. But this shows the technology is on its way to becoming affordable in the near future.</p>
<p>Quantum computers are very easily disrupted by changes in the environment and take a long time to reset. So IBM has developed a <a href="https://www.theverge.com/2019/1/8/18171732/ibm-quantum-computer-20-qubit-q-system-one-ces-2019">protective system</a> to keep the Q System One stable enough to perform tasks for commercial customers, which are <a href="https://newsroom.ibm.com/2019-01-08-ExxonMobil-and-Worlds-Leading-Research-Labs-Collaborate-with-IBM-to-Accelerate-Joint-Research-in-Quantum-Computing">likely to include</a> large companies, universities and research organisations that want to run complex simulations. As a result, IBM believes it has a commercially viable product, and is putting its money where its mouth is.</p>
<p>History shows us that technologies can experience rapid growth in use and capability once they become viable commercial products. After conventional digital computers became commercially viable, they experienced an exponential explosion referred to commonly as <a href="https://theconversation.com/moores-law-is-50-years-old-but-will-it-continue-44511">Moore’s Law</a>. Roughly every two years, computers have doubled in power while their size and costs have fallen by half. This “law” is really just a trend that has been made possible, in part, by market forces.</p>
<p>The IBM announcement does not guarantee that quantum computers will now experience Moore’s Law-style exponential growth of their own. It does, however, make that explosion likelier and sooner.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LAA0-vjTaNY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>In the long run, this means better, more advanced technology overall, for all of us. Quantum measurement devices are <a href="https://phys.org/news/2018-11-probing-quantum-physics-macroscopic-scale.html">more accurate</a>. Quantum imaging devices can produce <a href="http://iopscience.iop.org/article/10.1088/2040-8978/18/7/073002">better pictures</a>. Quantum batteries can <a href="https://physicsworld.com/a/quantum-battery-could-get-a-boost-from-entanglement/">charge faster</a>. Quantum cybersecurity offers <a href="https://www.information-age.com/quantum-cryptography-123477496/">better protection</a>. And quantum computers can <a href="https://www.scottaaronson.com/blog/?p=208">solve problems</a> no classical computer could ever hope to. </p>
<p>These are just the tip of the iceberg. In the short to mid-term, however, this also means we have something of an approaching crisis.</p>
<h2>Skills crisis</h2>
<p>Quantum technologies are disruptive, and more so in cybersecurity than any other field. Once large-scale quantum computers become available (which at the current rate could take another ten to 15 years), they could be used to access pretty much every secret on the internet. Online banking, private emails, passwords and secure chats would all be opened up. You would be able to impersonate any person or web page online.</p>
<p>This is because the information locks we use to secure privacy and authentication online <a href="https://theconversation.com/will-superfast-quantum-computers-mean-the-end-of-unbreakable-encryption-64402">are like butter</a> to a quantum computer’s hot knife. Quantum technology is disruptive in many other areas as well. If your business decides not to “go quantum” and your competitor or adversary does, you may well be at a strong disadvantage. </p>
<p>As the technology landscape realigns itself, it is quite likely that many tech professionals will see their skills become obsolete very quickly. Simultaneously, companies may find themselves scrambling to hire expertise that does not readily exist.</p>
<p>When geopolitical and market forces realign, it’s common for people in business to say everyone now has to learn a new language. For example, as China has grown in power and influence, it is not uncommon in business communities to <a href="https://theconversation.com/boris-is-right-its-time-for-us-to-learn-chinese-19354">hear the phrase</a> “we’ll all have to learn Mandarin now”. Perhaps it’s time for all of us to start learning to speak quantum.</p><img src="https://counter.theconversation.com/content/110331/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Carlos Perez-Delgado is a consultant at QandI (<a href="http://www.qandi.co.uk">www.qandi.co.uk</a>). </span></em></p>
Quantum computers are set to revolutionise technology, but very few people know how to use them.
Carlos Perez-Delgado, Lecturer in Computing, University of Kent
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/109894
2019-01-23T09:15:06Z
2019-01-23T09:15:06Z
How did Uranus end up on its side? We’ve been finding out
<figure><img src="https://images.theconversation.com/files/254167/original/file-20190116-163286-178cy51.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Uranus seen in this false-color view from NASA's Hubble Space Telescope.</span> <span class="attribution"><span class="source">NASA</span></span></figcaption></figure><p>Uranus is arguably the most mysterious planet in the solar system – we know very little about it. So far, we have only visited the planet once, with the <a href="https://voyager.jpl.nasa.gov/galleries/images-voyager-took/uranus/">Voyager 2</a> spacecraft back in 1986. The most obvious odd thing about this ice giant is the fact that it is spinning on its side. </p>
<p>Unlike all the other planets, which spin roughly “upright” with their spin axes at close to right angles to their orbits around the sun, Uranus is tilted by almost a right angle. So in its summer, the north pole points almost directly towards the sun. And unlike Saturn, Jupiter and Neptune, which have horizontal sets of rings around them, Uranus has vertical rings and moons that orbit around its tilted equator. </p>
<p>The ice giant also has a surprisingly cold temperature and a messy and off-centre magnetic field, unlike the neat bar-magnet shape of most other planets like Earth or Jupiter. Scientists therefore suspect that Uranus was once similar to the other planets in the solar system but was suddenly flipped over. So what happened? Our new research, published in the <a href="http://iopscience.iop.org/article/10.3847/1538-4357/aac725/meta">Astrophysical Journal</a> and <a href="https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/386791">presented at a meeting</a> of the American Geophysical Union, offers a clue. </p>
<h2>Cataclysmic collision</h2>
<p>Our solar system used to be a much more violent place, with protoplanets (bodies developing to become planets) colliding in violent giant impacts that helped create the worlds we see today. Most researchers believe that Uranus’ spin <a href="http://adsabs.harvard.edu/abs/1991uran.book.....B">is the consequence of a dramatic collision</a>. We set out to uncover how it could have happened.</p>
<p>We wanted to study giant impacts on Uranus to see exactly how such a collision could have affected the planet’s evolution. Unfortunately, we can’t (yet) build two planets in a lab and smash them together to see what really happens. Instead, we ran computer models simulating the events using a powerful supercomputer as the next best thing.</p>
<p>The basic idea was to model the colliding planets with millions of particles in the computer, each representing a lump of planetary material. We give the simulation the equations that describe how physics like gravity and material pressure work, so it can calculate how the particles evolve with time as they crash into each other. This way we can study even the fantastically complicated and messy results of a giant impact. Another benefit of using computer simulations is that we have full control. We can test a wide variety of different impact scenarios and explore the range of possible outcomes.</p>
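As a loose illustration of the basic idea, not the actual code used in the study, a single gravity-only update step for a set of particles might look like this (the direct-sum approach shown here scales as O(n²); production codes use much faster approximations):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def gravity_step(pos, vel, mass, dt, softening=1e3):
    """Advance particle positions and velocities by one timestep
    using direct-sum Newtonian gravity between every pair."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        diff = pos - pos[i]                     # vectors to every other particle
        dist2 = (diff ** 2).sum(axis=1) + softening ** 2
        inv_r3 = dist2 ** -1.5
        inv_r3[i] = 0.0                         # a particle exerts no force on itself
        acc[i] = G * (mass[:, None] * diff * inv_r3[:, None]).sum(axis=0)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

Repeating this step millions of times for millions of particles is what makes such simulations a supercomputing problem; the softening term is a standard trick to stop nearby particles producing unphysically large forces.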
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/pM7Z5lF9TwE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Our simulations (see above) show that a body at least twice as massive as the Earth could readily create the strange spin Uranus has today by slamming into and merging with a young planet. For more grazing collisions, the impacting body’s material would probably end up spread out in a thin, hot shell near the edge of Uranus’ ice layer, underneath the hydrogen and helium atmosphere.</p>
<p>This could inhibit the mixing of material inside Uranus, trapping the heat from its formation deep inside. Excitingly, this idea seems to fit with the observation that Uranus’ exterior is so cold today. Thermal evolution is very complicated, but it is at least clear how a giant impact can reshape a planet both inside and out.</p>
<h2>Super computations</h2>
<p>The research is also exciting from a computational perspective. Much like the size of a telescope, the number of particles in a simulation limits what we can resolve and study. However, simply trying to use more particles to enable new discoveries is a serious computational challenge, meaning it takes a long time even on a powerful computer.</p>
<p>Our latest simulations use over 100m particles, about 100-1,000 times <a href="https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/386791">more than most</a> other studies today use. As well as making for some stunning pictures and animations of how the giant impact happened, this opens up all sorts of new science questions we can now begin to tackle.</p>
<p>This improvement is thanks to <a href="http://swift.dur.ac.uk/">SWIFT</a>, a new simulation code we designed to take full advantage of <a href="https://theconversation.com/so-supercomputers-are-mega-powerful-but-what-can-they-actually-do-24987">contemporary “supercomputers”</a>. These are basically lots of normal computers linked up together. So, running a big simulation quickly relies on dividing up the calculations between all parts of the supercomputer.</p>
<p>SWIFT estimates how long each computing task in the simulation will take and tries to carefully share the work evenly for maximum efficiency. Just like a big new telescope, this jump to 1,000 times higher resolution reveals details we have never seen before. </p>
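The scheduling idea described here — estimate each task's cost, then spread the work evenly — can be sketched with a simple greedy assignment. This is a toy illustration of the general principle, not SWIFT's actual scheduler:

```python
import heapq

def balance_tasks(task_costs, n_workers):
    """Greedy longest-processing-time scheduling: assign each task,
    heaviest first, to whichever worker currently has the least load."""
    loads = [(0.0, w) for w in range(n_workers)]  # min-heap of (load, worker id)
    heapq.heapify(loads)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):
        load, w = heapq.heappop(loads)
        assignment[w].append(cost)
        heapq.heappush(loads, (load + cost, w))
    return assignment

# e.g. balance_tasks([4, 3, 3, 2], 2) splits the work into two loads of 6 each
```

In a real supercomputer the "cost" estimates are imperfect and the tasks depend on each other, which is why careful load balancing is a research problem in its own right.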
<h2>Exoplanets and beyond</h2>
<p>As well as learning more about the specific history of Uranus, another important motivation is understanding planet formation more generally. In recent years, we have discovered that the most <a href="https://theconversation.com/more-than-1-000-new-exoplanets-discovered-but-still-no-earth-twin-59274">common type of exoplanets</a> (planets that orbit stars other than our sun) <a href="http://adsabs.harvard.edu/abs/2013ApJ...766...81F">are quite similar to Uranus and Neptune</a>. So everything we learn about the possible evolution of our own ice giants feeds in to our understanding of their far distant cousins and the evolution of potentially habitable worlds.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/254706/original/file-20190121-100261-19grcwz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Uranus seen by Voyager 2.</span>
<span class="attribution"><span class="source">NASA/JPL-Caltech</span></span>
</figcaption>
</figure>
<p>One exciting detail we studied that is very relevant to the question of extraterrestrial life is the fate of an atmosphere after a giant impact. Our high resolution simulations reveal that some of the atmosphere that survives the initial collision can still be removed by the subsequent violent bulging of the planet. The lack of an atmosphere makes a planet a lot less likely to host life. Then again, perhaps the massive energy input and added material might help create useful chemicals for life as well. Rocky material from the impacting body’s core can also get mixed into the outer atmosphere. This means we can look for certain trace elements which might be indicators of similar impacts if we observe them in an exoplanet’s atmosphere.</p>
<p>Lots of questions remain about Uranus, and giant impacts in general. Even though our simulations are getting more detailed, we still have lots to learn. Many people are therefore calling for a new mission to Uranus and Neptune to study their strange magnetic fields, their quirky families of moons and rings and even simply what precisely they’re actually made of. </p>
<p>I would very much like to see that happen. The combination of observations, theoretical models and computer simulations will ultimately help us understand not only Uranus, but the myriad planets that fill our universe and how they came to be.</p><img src="https://counter.theconversation.com/content/109894/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jacob Kegerreis receives funding from STFC. </span></em></p>
A body at least twice as massive as the Earth smashing into Uranus could have made it lopsided, shows research.
Jacob Kegerreis, PhD Student, Computational Astronomy, Durham University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/96124
2018-05-09T06:00:27Z
2018-05-09T06:00:27Z
Budget 2018: when scientists make their case effectively, politicians listen
<p>Budget 2018 confirms that the case for funding science is being heard in Canberra.</p>
<p>Science and research are integrated in the national objectives laid down in the treasurer’s speech: to create jobs, boost health and improve the liveability of communities.</p>
<p>Many of the measures appear to have origins in proposals advanced by the science community. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/infographic-budget-2018-at-a-glance-95649">Infographic: Budget 2018 at a glance</a>
</strong>
</em>
</p>
<hr>
<h2>Lessons from Budget 2018</h2>
<p>What lessons can we take from this year’s outcome? After two years in Canberra, I haven’t discovered a magic key to the Federal coffers. But here are my general observations.</p>
<h3>Intrinsic value is not sufficient</h3>
<p>We can’t assume that the broad public support for science will translate into support for specific proposals unless we do the work to explain the benefits, including more jobs and better health. </p>
<p>Being intrinsically valuable is not sufficient. Clarity about what we can deliver is essential when science is competing with spending proposals with obvious and immediate benefits – like more hospital beds.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-isnt-broken-but-we-can-do-better-heres-how-95139">Science isn't broken, but we can do better: here's how</a>
</strong>
</em>
</p>
<hr>
<h3>Politicians need help</h3>
<p>It helps to remember that most politicians aren’t experts in science policy. I’ve wrestled for years with the term “national research infrastructure”. People I talk to outside the research sector simply don’t understand it. A small change to saying “national research facilities” turns the lights on. </p>
<h3>Show outcomes</h3>
<p>It’s important for politicians to see the outcomes of public investment. They see the dollar figures in the budget papers but they don’t necessarily connect the research breakthroughs they read about in the newspapers years later to the programs that made them possible. It is important to help local members, irrespective of their party, recognise the impact of previously funded programs working for Australians. </p>
<h3>Review and communicate</h3>
<p>Take stock of progress and give credit to what has been achieved to date before heading back into the arena for the next round. As custodians of public funds, researchers should be proud to share their achievements with the taxpayers who ultimately make them possible.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-meets-parliament-doesnt-let-the-rest-of-us-off-the-hook-90692">Science Meets Parliament doesn't let the rest of us off the hook</a>
</strong>
</em>
</p>
<hr>
<h3>We’re all in this</h3>
<p>Finally, I’ve always found politicians to be far more receptive to funding proposals when they see commitment from other quarters. It’s not just the Commonwealth that needs to step up. It’s business. It’s state and territory governments. It’s philanthropists. </p>
<p>If we reach out widely, we can strengthen our advocacy with new allies, and at the same time, help government to focus on the things that only government can do.</p>
<p>Below I highlight some key areas funded through Budget 2018. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/dJDCokxPDYc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Key science and technology items in Budget 2018, from the Australian Academy of Science.</span></figcaption>
</figure>
<h2>National facilities</h2>
<p>I welcome the emphasis on national-scale research facilities: I was Chair of the taskforce that delivered the <a href="https://www.education.gov.au/2016-national-research-infrastructure-roadmap">2016 National Research Infrastructure Roadmap</a>. </p>
<p>This year’s budget invests $1.9 billion over 12 years, adding to the $1.5 billion over ten years committed to the National Collaborative Research Infrastructure Strategy (<a href="https://www.education.gov.au/national-collaborative-research-infrastructure-strategy-ncris">NCRIS</a>) in 2015. </p>
<p>As shown below, $393.3 million is allocated in the next five years. </p>
<hr>
<iframe src="https://datawrapper.dwcdn.net/WuSo1/2/" scrolling="no" frameborder="0" allowtransparency="true" width="100%" height="215"></iframe>
<hr>
<p>I am encouraged that the government has committed to review the investment plan every two years, in recognition of the importance of keeping this discussion firmly on the national agenda.</p>
<p>In addition to these funds, the budget acts on an urgent priority flagged in the <a href="https://www.education.gov.au/2016-national-research-infrastructure-roadmap">Roadmap</a> – high performance computing. $70 million for the Pawsey Supercomputing Centre in Perth adds to the $70 million previously committed to the National Computational Infrastructure in Canberra. </p>
<p>This builds on the $119 million announced for the European Southern Observatory in the previous budget.</p>
<h2>National missions</h2>
<p>A second notable feature is the follow-through on the national missions proposed in the <a href="https://industry.gov.au/Innovation-and-Science-Australia/Australia-2030/Pages/default.aspx">Innovation and Science Australia (ISA) 2030 Plan</a>.</p>
<p>The ISA mission to preserve the Great Barrier Reef is supported by $100 million in new investment for coral reef research and restoration projects, as part of a $500 million package <a href="https://theconversation.com/500-million-for-the-great-barrier-reef-is-welcome-but-we-need-a-sea-change-in-tactics-too-95875">announced last month</a>.</p>
<p>The ISA mission to harness precision medicine and genomics to make Australia the healthiest nation in the world is backed with $500 million over the next ten years from the Medical Research Future Fund. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/four-ways-precision-medicine-is-making-a-difference-90459">Four ways precision medicine is making a difference</a>
</strong>
</em>
</p>
<hr>
<p>A scaffold for the genomics revolution was provided by the Australian Council of Learned Academies (ACOLA) in the recent <a href="https://acola.org.au/wp/pmed/">Precision Medicine Horizon Scanning report</a>, commissioned by the Commonwealth Science Council.</p>
<p>A forthcoming Horizon Scanning report, <a href="http://www.chiefscientist.gov.au/advice-to-government/horizon-scanning/">on artificial intelligence</a>, will likewise inform the $30 million commitment to AI and machine learning in the 2018 budget. The funding includes a national ethics framework for AI – a welcome development that will position Australia well in the global AI standards debate. </p>
<hr>
<iframe src="https://datawrapper.dwcdn.net/F4bnD/2/" scrolling="no" frameborder="0" allowtransparency="true" width="100%" height="191"></iframe>
<hr>
<p>More broadly, the budget acts on priorities that scientists have championed for years.</p>
<p>There is $41 million for a National Space Agency, including a $15 million fund for International Space Investment. </p>
<hr>
<iframe src="https://datawrapper.dwcdn.net/Xspft/2/" scrolling="no" frameborder="0" allowtransparency="true" width="100%" height="191"></iframe>
<hr>
<p>Over four years, $36 million will be provided for the Antarctic science program.</p>
<p>There is $4.5 million over four years to encourage more women into STEM education and careers, including a decadal plan for women in science. </p>
<p>With a focus on GPS technology, $225 million is allocated over four years to improve the accuracy of satellite navigation, and $37 million over three years for Digital Earth Australia. The goal of this funding is to make satellite data accessible for research, regional Australia and business. </p>
<hr>
<iframe src="https://datawrapper.dwcdn.net/IUb7e/2/" scrolling="no" frameborder="0" allowtransparency="true" width="100%" height="374"></iframe>
<hr>
<p>There is also $20 million for an Asian Innovation Strategy, including an extension of the Australia-India Strategic Research Fund for four years.</p>
<h2>Business innovation</h2>
<p>In the business arena, changes to address integrity and additionality (that is, driving R&D beyond “business as usual”) in the Research and Development Tax Incentive (<a href="https://www.business.gov.au/assistance/research-and-development-tax-incentive/reference-groups-and-policy/rnd-tax-incentive-review">RDTI</a>) will reduce the money the scheme delivers to industry by an estimated $2.4 billion.</p>
<p>As one of the authors of the “3Fs” review of the RDTI – with Bill Ferris and John Fraser – I support the rebalancing of Australia’s business innovation budget. We are a global outlier in our heavy reliance on the indirect pull-through achieved through the tax system, instead of mission-driven direct investment. </p>
<p>With money recouped from the RDTI, scientists and research-intensive businesses should be making the case for more and better-targeted programs. Work remains to be done.</p>
<p class="fine-print"><em><span>Alan Finkel does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Many Budget 2018 measures appear to have origins in proposals advanced by the science community.
Alan Finkel, Australia’s Chief Scientist, Office of the Chief Scientist
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/82647
2017-08-22T19:22:27Z
2017-08-22T19:22:27Z
Hype and cash are muddying public understanding of quantum computing
<figure><img src="https://images.theconversation.com/files/182548/original/file-20170818-28120-z1cm63.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An ion-trap used for quantum computing research in the Quantum Control Laboratory at the University of Sydney.</span> <span class="attribution"><span class="source">Michael Biercuk</span>, <span class="license">Author provided</span></span></figcaption></figure><p>It’s no surprise that quantum computing has become a media obsession. A functional and useful quantum computer would represent one of the century’s most profound technical achievements.</p>
<p>For researchers like me, <a href="http://www.economist.com/news/essays/21717782-quantum-technology-beginning-come-its-own">the excitement</a> is welcome, but some claims appearing in popular outlets can be baffling.</p>
<p>A recent <a href="https://blogs.microsoft.com/ai/2016/11/20/microsoft-doubles-quantum-computing-bet/">infusion of cash</a> and <a href="https://www.wired.com/2017/03/race-sell-true-quantum-computers-begins-really-exist/">attention</a> from the tech giants has woken the interest of analysts, who are now eager to proclaim a breakthrough moment in the development of this extraordinary technology.</p>
<p>Quantum computing is described as “just around the corner”, simply awaiting the engineering prowess and entrepreneurial spirit of the tech sector to realise its full potential. </p>
<p>What’s the truth? Are we really just a few years away from having quantum computers that can <a href="http://spectrum.ieee.org/tech-talk/computing/hardware/encryptionbusting-quantum-computer-practices-factoring-in-scalable-fiveatom-experiment">break all online security systems</a>? Now that the technology giants are engaged, do we sit back and wait for them to deliver? Is it now all “just engineering”?</p>
<h2>Why do we care so much about quantum computing?</h2>
<p>Quantum computers are machines that use the rules of <a href="https://theconversation.com/explainer-quantum-physics-570">quantum physics</a> – in other words, the physics of very small things – to <a href="https://arxiv.org/pdf/quant-ph/9809016.pdf">encode and process information</a> in new ways. </p>
<p>They exploit the unusual physics we find on these tiny scales, physics that defies our daily experience, in order to solve problems that are exceptionally challenging for “classical” computers. Don’t just think of quantum computers as faster versions of today’s computers – think of them as computers that function in a totally new way. The two are as different as an abacus and a PC.</p>
<p>They can (in principle) solve hard, high-impact questions in fields such as codebreaking, search, chemistry and physics. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/quantum-computers-could-crack-existing-codes-but-create-others-much-harder-to-break-21807">Quantum computers could crack existing codes but create others much harder to break</a>
</strong>
</em>
</p>
<hr>
<p>Chief among these is “factoring”: finding the two prime numbers, divisible only by one and themselves, which when multiplied together equal a target number. For instance, the prime factors of 15 are 3 and 5. </p>
<p>As simple as it looks, when the number to be factored becomes large, say 1,000 digits long, the problem is effectively impossible for a classical computer. The fact that this problem is so hard for any conventional computer is how we secure most internet communications, such as through <a href="https://www.ibm.com/support/knowledgecenter/SSB23S_1.1.0.13/gtps7/s7pkey.html">public-key encryption</a>.</p>
<p>Quantum computers are known, in principle, to be able to perform factoring exponentially faster than any classical supercomputer. But competing with a supercomputer will still require a pretty sizeable quantum computer. </p>
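<p>The asymmetry is easy to see with a naive classical approach: trial division factors small numbers instantly, but its running time grows so quickly with the number of digits that a 1,000-digit number is hopelessly out of reach. A minimal Python sketch, purely for illustration (real cryptographic attacks use far more sophisticated algorithms):</p>

```python
def prime_factors(n):
    """Factor n by trial division.

    Fine for small numbers, but hopeless for the ~1,000-digit
    numbers that secure internet communications: the work grows
    roughly with the square root of n.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:        # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(15))  # [3, 5]
```

<p>Multiplying 3 by 5 is trivial; running the search in the other direction is what becomes exponentially harder as the target number grows, and that one-way difficulty is exactly what public-key encryption relies on.</p>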
<h2>Money changes everything</h2>
<p>Quantum computing began as a distinct discipline in the late 1990s when the US government, aware of the newly discovered potential of these machines for codebreaking, began <a href="http://www.washingtonpost.com/wp-dyn/articles/A62158-2004Feb22.html">investing in university research</a>. </p>
<p>The field drew together teams from all over the world, including Australia, where we now have <a href="http://equs.org/">two Centres</a> <a href="http://www.cqc2t.org/">of Excellence</a> in quantum technology (the author is part of the Centre of Excellence for Engineered Quantum Systems). </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182719/original/file-20170821-17144-e1hpb7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Special piping and wiring supports quantum research in the Sydney Nanoscience Hub.</span>
<span class="attribution"><span class="source">AINST</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>But the academic focus is now shifting, in part, to industry.</p>
<p>IBM has long had a <a href="https://www.research.ibm.com/ibm-q/">basic research program</a> in the field. It was recently joined by Google, who <a href="https://www.wired.com/2014/09/martinis/">invested in a University of California team</a>, and Microsoft, which has partnered with academics globally, including <a href="http://sydney.edu.au/news-opinion/news/2017/07/25/microsoft-and-university-of-sydney-forge-quantum-partnership.html">the University of Sydney</a>.</p>
<p>Seemingly smelling blood in the water, Silicon Valley venture capitalists also recently began investing in new <a href="https://www.wsj.com/articles/startups-trapped-ions-could-lead-to-better-quantum-performance-1501078451">startups</a> working to build quantum computers. </p>
<p>The media has mistakenly seen the entry of commercial players as the genesis of recent technological acceleration, rather than a <em>response</em> to these advances. </p>
<p>So now we find a variety of competing claims about the state of the art in the field, where the field is going, and who will get to the end goal – a large-scale quantum computer – first. </p>
<h2>The state of the art in the strangest of technologies</h2>
<p>Conventional computer microprocessors can have more than one billion fundamental logic elements, known as transistors. In quantum systems, the fundamental quantum logic units are known as qubits, and for now, they mostly number in the range of a dozen. </p>
<p><a href="https://quantumexperience.ng.bluemix.net/qx/community">Such devices</a> are exceptionally exciting to researchers and represent huge progress, but they are little more than toys from a practical perspective. They are not near what’s required for factoring or any other application – they’re too small and suffer too many errors, despite what the frantic headlines may promise.</p>
<p>For instance, it’s not even easy to answer the question of which system has the best qubits right now. </p>
<p>Consider the two dominant technologies. Teams using <a href="http://www.quantumoptics.at/en/">trapped</a> <a href="http://iontrap.umd.edu/">ions</a> have qubits that are <a href="http://www.physics.usyd.edu.au/%7Embiercuk/Research.html">resistant to errors</a>, but relatively slow. Teams using <a href="http://rsl.yale.edu/">superconducting</a> qubits (including <a href="https://www-03.ibm.com/press/us/en/pressrelease/49661.wss">IBM</a> and <a href="https://www.newscientist.com/article/mg22329861-300-google-launches-plan-to-built-its-own-quantum-computer/">Google</a>) have relatively error-prone qubits that are much faster, and may be easier to replicate in the near term. </p>
<p>Which is better? There’s no <a href="https://www.research.ibm.com/ibm-q/resources/quantum-volume.pdf">straightforward answer</a>. A quantum computer with many qubits that suffer from lots of errors is not necessarily more useful than a very small machine with very stable qubits.</p>
<p>Because quantum computers can also take different forms (general purpose versus tailored to one application), we can’t even reach agreement on which system currently has the greatest set of capabilities. </p>
<p>Similarly, there’s now seemingly endless competition over simplified metrics such as the number of qubits. <a href="http://spectrum.ieee.org/tech-talk/computing/hardware/ibm-puts-a-quantum-processor-in-the-cloud">Five</a>, 16, <a href="http://spectrum.ieee.org/tech-talk/computing/hardware/ibm-expanding-cloud-quantum-computer-10fold">soon 49</a>! The question of whether a quantum computer is useful is defined by much more than this.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182516/original/file-20170818-28132-vwoy3f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A semiconductor qubit device mounted on a custom cryogenic printed circuit board.</span>
<span class="attribution"><span class="source">Jayne Ion/University of Sydney</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Where to from here?</h2>
<p>There’s been a media focus lately on achieving “<a href="http://spectrum.ieee.org/computing/hardware/google-plans-to-demonstrate-the-supremacy-of-quantum-computing">quantum supremacy</a>”. This is the point where a quantum computer outperforms its best classical counterpart, and reaching this would absolutely mark an important conceptual advance in quantum computing. </p>
<p>But don’t confuse “quantum supremacy” with “utility”. </p>
<p>Some quantum computer researchers are <a href="http://www.scottaaronson.com/papers/optics.pdf">seeking to devise</a> slightly arcane problems that might allow quantum supremacy to be reached with, say, 50-100 qubits – numbers reachable within the next several years. </p>
<p>Achieving quantum supremacy does not mean either that those machines will be useful, or that the path to large-scale machines will become clear.</p>
<p>Moreover, we still need to figure out how to deal with errors. Classical computers rarely suffer hardware faults – the “blue screen of death” generally comes from software bugs, rather than hardware failures. The likelihood of hardware failure is usually less than something like one in a <a href="https://www.elsevier.com/books/computer-architecture/hennessy/978-0-12-383872-8">billion-quadrillion</a>, or <a href="ftp://ftp.cs.utexas.edu/pub/dburger/papers/DSN02.pdf">10<sup>-24</sup></a> in scientific notation.</p>
<p>The best quantum computer hardware, on the other hand, typically achieves only about one in <a href="https://www.nature.com/articles/ncomms14485">10,000</a>, or 10<sup>-4</sup>. That’s 20 <em>orders of magnitude</em> worse.</p>
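<p>The “20 orders of magnitude” is simply the base-10 logarithm of the ratio between the two error rates. A quick check in Python, using the article’s rough figures:</p>

```python
import math

# Approximate error rates quoted in the article (illustrative figures)
classical_error = 1e-24  # ~1 hardware fault per billion-quadrillion operations
quantum_error = 1e-4     # ~1 error per 10,000 operations on the best qubits

gap = math.log10(quantum_error / classical_error)
print(f"{gap:.0f} orders of magnitude")  # 20 orders of magnitude
```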
<h2>Is it all just engineering?</h2>
<p>We’re seeing a slow creep up in the number of qubits in the most advanced systems, and clever scientists are thinking about <a href="http://www.pnas.org/content/114/29/7555">problems</a> that might be usefully addressed with small quantum computers containing just a few hundred qubits. </p>
<p>But we still face many fundamental questions about how to build, operate or even validate the performance of the large-scale systems we sometimes hear are just around the corner. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/compute-this-the-quantum-future-is-crystal-clear-6671">Compute this: the quantum future is crystal clear</a>
</strong>
</em>
</p>
<hr>
<p>As an example, if we built a fully “<a href="http://iopscience.iop.org/article/10.1088/0034-4885/76/7/076001/meta">error-corrected</a>” quantum computer at the scale of the millions of qubits required for useful factoring, as far as we can tell, it would represent a totally new state of matter. That’s pretty fundamental.</p>
<p>At this stage, there’s no clear path to the millions of error-corrected qubits we believe are required to build a useful factoring machine. <a href="https://www.iarpa.gov/index.php/research-programs/logiq">Current global efforts</a> (in which this author is a participant) are seeking to build just one error-corrected qubit to be delivered about five years from now.</p>
<p>At the end of the day, none of the teams mentioned above are likely to build a useful quantum computer in 2017 … or 2018. But that shouldn’t cause concern when there are so many exciting questions to answer along the way.</p>
<p class="fine-print"><em><span>Professor Michael J. Biercuk receives funding to support research in quantum computing from US Government Agencies including the Army Research Office and IARPA, as well as the Australian Research Council and the Lockheed Martin Corporation.</span></em></p>
Quantum computing is being described as “just around the corner”. Is it?
Michael J. Biercuk, Professor of Quantum Physics and Quantum Technology, University of Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/76882
2017-05-11T14:12:50Z
2017-05-11T14:12:50Z
Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution
<p>On the <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-three">seventh move of the crucial deciding game</a>, black made what some now consider to have been a critical error. When black mixed up the moves for the <a href="http://www.chessgames.com/perl/chessgame?gid=1070917">Caro-Kann defence</a>, white took advantage and created a new attack by sacrificing a knight. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat. The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament, which ignited an international conspiracy theory that is <a href="http://en.chessbase.com/post/deep-blue-s-cheating-move">still questioned 20 years later</a>.</p>
<p>This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: IBM supercomputer, Deep Blue.</p>
<p>In defeating Kasparov on May 11 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it. </p>
<p>In an echo of the <a href="http://www.slate.com/blogs/atlas_obscura/2015/08/20/the_turk_an_supposed_chess_playing_robot_was_a_hoax_that_started_an_early.html">chess automaton hoaxes</a> of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grand master. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine.</p>
<hr>
<p><strong><em>Listen to an <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-podcast-88607">audio version</a> of this article on The Conversation’s <a href="https://theconversation.com/uk/topics/in-depth-out-loud-podcast-46082">In Depth Out Loud</a> podcast.</em></strong></p>
<iframe src="https://player.acast.com/5e29c8205aa745a456af58c8/episodes/5e29c8365aa745a456af58d6?theme=default&cover=1&latest=1" frameborder="0" width="100%" height="110px" allow="autoplay"></iframe>
<hr>
<p>Yet the reality was that Deep Blue’s victory was precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behaviour. This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale.</p>
<p>What the match did do, however, was signal the start of a societal shift that is gaining increasing speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the <a href="http://www.computerweekly.com/feature/How-the-financial-services-sector-uses-big-data-analytics-to-predict-client-behaviour">financial systems</a> that dominate the economy to <a href="http://www.bbc.co.uk/news/business-26613909">online dating apps</a> that try to find us the perfect partner. What started as a student project helped usher in the age of big data.</p>
<h2>A human error</h2>
<p>The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used <a href="http://www.nytimes.com/1993/09/15/arts/declining-a-draw-short-loses-to-a-kasparov-counterattack.html">against human opponents</a> in the past.</p>
<p>What surprised Kasparov was <a href="http://www.thechessmind.net/blog/2012/7/14/a-look-back-at-deeper-blue-vs-kasparov-1997game-2.html">Deep Blue’s subsequent move</a>. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">“stunning” and “exceptional”</a>. The move left Kasparov riled and ultimately thrown off his strategy. He was so perturbed that he eventually walked away, forfeiting the game. Worse still, he never recovered, drawing the next three games and then making the error that led to his demise in the final game.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=598&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=598&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=598&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=751&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=751&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=751&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Open file.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Open_file">Wikipedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>The move was based on the strategic advantage that a player can gain from creating an <a href="https://www.chess.com/article/view/open-files3">open file</a>, a column of squares on the board (as viewed from above) that contains no pieces. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">training with the grand master Joel Benjamin</a>, the Deep Blue team had learnt there was sometimes a more strategic option than opening a file and then moving a rook to it. Instead, the tactic involved piling pieces onto the file and then choosing when to open it up.</p>
<p>When the programmers learned this, they rewrote Deep Blue’s code to incorporate the moves. During the game, the computer used the threat of a potential open file to put pressure on Kasparov, forcing him to defend on every move. That psychological advantage eventually wore Kasparov down. </p>
<p>From the moment that Kasparov lost, <a href="http://www.bbc.co.uk/programmes/p03rq51h">speculation and conspiracy theories</a> started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games to rectify bugs that had been identified during play. They also rejected the claim that the programming had been adapted to Kasparov’s style of play. Instead they had relied on the computer’s ability to search through huge numbers of possible moves.</p>
<p>IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the <a href="https://www.research.ibm.com/deepblue/watch/html/c.shtml">computer’s detailed logs</a>, as Kasparov had also requested, until after the decommissioning. But the subsequent <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">detailed analysis</a> of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.</p>
<p>There has since been speculation that Deep Blue only triumphed because <a href="https://www.cnet.com/news/did-a-bug-in-deep-blue-lead-to-kasparovs-defeat/">of a bug in the code</a> during the first game. One of <a href="http://fivethirtyeight.com/features/rage-against-the-machines/">Deep Blue’s designers</a> has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy.</p>
<p>He managed to win the game and the bug was fixed for the second round. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “<a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">terrible blunder</a>”.</p>
<p>Whichever of these accounts of Kasparov’s reactions to the match are true, they point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that today have helped computers win at far more complex games, <a href="https://theconversation.com/googles-go-triumph-is-a-milestone-for-artificial-intelligence-research-53762">such as Go</a>.</p>
<p>But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available in the world today. What makes this even more amazing is that it began not as a flagship effort at one of the largest computer manufacturers but as a student thesis in the 1980s. </p>
<h2>Chess race</h2>
<p>When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an <a href="https://books.google.co.uk/books?id=zV0W4729UqkC&printsec=frontcover&dq=%22Behind+Deep+Blue:+Building+the+Computer+that+Defeated+the+World+Chess+Champion,%22+rivalry">intense rivalry</a> between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">interest in computer chess</a>. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the <a href="http://www.nytimes.com/1988/09/26/nyregion/for-first-time-a-chess-computer-outwits-grandmaster-in-tournament.html">first to beat a chess grand master</a>, and was asked to assist with hardware design.</p>
<p>But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer known as ChipTest, drawing on the architecture of Bell Laboratory’s <a href="http://link.springer.com/chapter/10.1007%2F978-1-4757-1968-0_28">chess machine, Belle</a>. ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second.</p>
<p>Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor. Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game.</p>
<p>But Deep Thought hit a road block later that year when it came up against (<a href="http://www.nytimes.com/1989/10/23/nyregion/kasparov-beats-chess-computer-for-now.html">and lost to</a>) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM. </p>
<p>Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “<a href="https://chessprogramming.wikispaces.com/Evaluation">evaluation function</a>”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together.</p>
<p>The problem was that the communication between the chips was slow and used up a lot of processing power. What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads such as off-chip communication and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second.</p>
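The idea of an evaluation function can be made concrete with a toy sketch. Deep Blue’s evaluation functions combined thousands of hand-tuned positional terms in custom hardware; the version below is a minimal illustration that only counts material, using the standard textbook piece values. The board representation and example position are hypothetical.

```python
# Toy chess evaluation function: score a position by material count only.
# Uppercase letters are white pieces, lowercase are black, '.' is empty.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Score a position from white's point of view.

    `board` is any iterable of piece letters, e.g. a string listing
    the contents of every square.
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# Example: both sides have full armies except black is missing a rook,
# so the evaluation favours white by +5.
board = "RNBQKBNR" + "PPPPPPPP" + "." * 40 + "pppppppp" + "rnbqkbn."
print(evaluate(board))  # → 5
```

Real engines add many refinements on top of material (piece mobility, king safety, pawn structure), but they all reduce a position to a single number in this way so that the search can compare moves.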
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1132&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1132&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1132&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An imposing opponent.</span>
<span class="attribution"><span class="source">Jim Gardner/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Increasing the number of moves the computer could process was important because chess computers have traditionally used what is known as “brute force” techniques. Human players <a href="http://www.csis.pace.edu/%7Ectappert/dps/pdf/ai-chess-deep.pdf">learn from past experience</a> to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.</p>
<p>By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue <a href="http://content.time.com/time/subscriber/article/0,33009,984304,00.html">lost the overall match</a> 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.</p>
<p>To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what enabled the team to so highly optimise the parallel computing power across the chips. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around <a href="https://www.theguardian.com/theguardian/2011/may/12/deep-blue-beats-kasparov-1997">200,000,000 moves per second</a>. This meant it could explore how each possible strategy would play out <a href="http://www.sciencedirect.com/science/article/pii/S0004370201001291">up to 40 or more moves</a> into the future.</p>
<h2>Parallel revolution</h2>
<p>By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded <a href="http://www.telegraph.co.uk/news/matt/9885264/From-the-archive-Chess-computer-beats-Kasparov-in-19-moves.html">with a story</a> when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come. What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data.</p>
<p>Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “<a href="https://theconversation.com/explainer-what-is-big-data-13780">big data</a>”) to look for significant patterns are involved in <a href="https://www.theguardian.com/public-leaders-network/2014/apr/17/big-data-government-public-services-expert-views">planning public services</a> such as transport or healthcare, and enable companies to <a href="https://theconversation.com/the-future-of-online-advertising-is-big-data-and-algorithms-69297">target advertising</a> to specific groups of people. </p>
<p>These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">significant insight</a> into the massively parallel multi-chip systems that have made this possible. In particular they showed the capabilities of a general-purpose computer system that controlled a large number of custom chips designed for a specific application.</p>
<p>The science of <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/transform/">molecular dynamics</a>, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics to look ahead to see how new drugs might react in the body, just like looking ahead at different chess moves. Molecular dynamic simulations have helped <a href="http://pubs.acs.org/doi/pdf/10.1021/acs.jmedchem.5b01684">speed up the development</a> of successful drugs, such as some of those <a href="https://bmcbiol.biomedcentral.com/articles/10.1186/1741-7007-9-71">used to treat HIV</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Molecular modelling.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>For very broad applications, such as modelling <a href="https://www.research.ibm.com/deepblue/learn/html/e.5.shtml">financial systems</a> and <a href="https://www.research.ibm.com/deepblue/learn/html/e.4.shtml">data mining</a>, designing custom chips for an individual task in these areas would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors.</p>
<p>Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen but also handle information using lots of processors in parallel. So now they are often used in <a href="http://www.nvidia.com/object/what-is-gpu-computing.html">high-performance computers</a> running large data sets and to run powerful artificial intelligence tools such as <a href="https://theconversation.com/what-powers-facebook-and-googles-ai-and-how-computers-could-mimic-brains-52232">Facebook’s digital assistant</a>. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.</p>
<p>The world of chess playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. And both times he managed to avoid a defeat, although he still made errors that forced him <a href="http://www.thechessdrum.net/tournaments/Kasparov-DeepJr/">into a draw</a>. However, both machines convincingly beat their human counterparts in the <a href="https://en.wikipedia.org/wiki/Human%E2%80%93computer_chess_matches#Man_vs_Machine_World_Team_Championship_.282004.E2.80.932005.29">2004 and 2005 Man vs Machine World Team Championships</a>.</p>
<p>Junior and Fritz marked a <a href="https://books.google.co.uk/books?id=KkQBCAAAQBAJ&pg=PA30&dq=chess+machines+junior+fritz&hl=en&sa=X&ved=0ahUKEwiJsfvl3-TTAhWnJcAKHVKtAOgQ6AEILTAB#v=onepage&q=chess%20machines%20junior%20fritz&f=false">change in the approach</a> to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. This approach can beat brute-force techniques while running on nothing more than a desktop PC.</p>
<p>But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win. One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.</p><img src="https://counter.theconversation.com/content/76882/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
The in depth story of a student project that paved the way for a society-level shift in how we use computers.
Mark Robert Anderson, Professor in Computing and Information Systems, Edge Hill University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/65446
2016-09-26T15:50:23Z
2016-09-26T15:50:23Z
A supercomputer just made the world’s first AI-created film trailer – here’s how well it did
<figure><img src="https://images.theconversation.com/files/139292/original/image-20160926-31870-1y6fvz6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><span class="source">20th Century Fox</span></span></figcaption></figure><p>More people have been talking about the trailer for the sci-fi/horror film Morgan than the movie itself. It’s partly because the commercial and <a href="https://www.theguardian.com/film/2016/sep/04/morgan-sci-fi-thriller-review-kate-mara-toby-jones-ai">critical response</a> to the film has been <a href="http://www.empireonline.com/movies/morgan/review/">less than lukewarm</a>, and partly because the clip was the first to be created entirely by artificial intelligence.</p>
<p>At the request of the filmmakers at 20th Century Fox, IBM used its <a href="http://www.ibm.com/watson/">supercomputer Watson</a> to <a href="http://www.wired.co.uk/article/ibm-watson-ai-film-trailer">build a trailer</a> from the final version of Morgan, which tells the story of an artificially created human. First Watson was fed background information on the horror genre in the form of a <a href="http://www.cbronline.com/news/big-data/analytics/horror-movie-morgan-trailer-gets-the-ibm-artificial-intelligence-treatment-4996128">hundred film trailers</a>. It used visual and aural analysis in order to identify the images, sounds, and emotions that are usually found in frightening and suspenseful trailers.</p>
<p>Watson then analysed Morgan and identified the key moments of plot action from which a trailer of the film could be generated. Only the final act of putting the sounds and images together to create the trailer required human intervention. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gJEzuYynaiw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>So how did Watson do? The trailer features the familiar visual and narrative devices that have long been staples of the horror film: the reclusive “mad” scientist, the businesslike “investigator”, and the eerie soundtrack, including the main theme and a lullaby that evokes themes of childhood and innocence (contrasted with images of physical violence and bloodshed). In fact, the iconography featured in Watson’s trailer reaffirms what many film theorists say are the <a href="https://books.google.co.uk/books?id=scE4AAAAQBAJ&pg=PA127&dq=generic+conventions+of+horror+films+mad+scientist&hl=en&sa=X&ved=0ahUKEwjcipilj6zPAhWDLMAKHVpsBOUQ6AEIHDAA#v=onepage&q=generic%20conventions%20of%20horror%20films%20mad%20scientist&f=false">generic conventions of horror films</a>, based on iconic examples such as the 1931 version of Frankenstein.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BN8K-4osNb0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But is the purpose of a film trailer just to repeat the generic conventions that characterise a film? While some trailers clearly do this, or simply trumpet the presence of star actors, others highlight the film’s spectacular possibilities. Early film trailers often described the wonders of the emerging technology of cinema such as <a href="https://www.youtube.com/watch?v=mW6GfJ5Tvms">synchronised sound (Vitaphone)</a> <a href="https://www.youtube.com/watch?v=a-P_Ira6kgE">and Technicolor</a>, and many still underline the historical moment of the film. Others focus on explaining the story and conveying <a href="https://books.google.co.uk/books?id=1xxRqocKb6cC&pg=PA30&dq=suggestive+aesthetic,+structural+and+thematic+motifs&hl=en&sa=X&ved=0ahUKEwiz7NDTl4_PAhWGBsAKHfW-BC4Q6AEIHjAA#v=onepage&q=suggestive%20aesthetic%2C%20structural%20and%20thematic%20motifs&f=false">the movie’s look, feel and themes</a> for the prospective audience. </p>
<h2>Capturing horror themes</h2>
<p>The Watson trailer for Morgan succeeds in identifying the aesthetic and thematic motifs of the film, as well as the emotional charges that underpin them. For example, it references a trope of the horror genre made familiar by films such as The Exorcist (1973) and The Omen (1976), which dispels the presumed innocence of children. In the Watson trailer we see this represented with images of Morgan’s first birthday contrasted with images of bloody violence. Meanwhile, the use of lines of dialogue such as "I have to say goodbye to mother” is clearly based on the supercomputer’s ability to identify Freudian themes from well known examples in the horror genre, <a href="https://www.academia.edu/12043112/The_use_of_Freudian_themes_in_Alfred_Hitchcocks_Psycho_and_Vertigo">most notably Psycho</a> (1960). </p>
<p>What Watson doesn’t do is give viewers a clear understanding of the story (or provide any of the other historical functions of Hollywood trailers). The difference becomes obvious if you compare the Watson-made trailer with the film’s “official” (human-made) clip, which reveals three narrative threads to the storyline, as well as using many of the stock motifs identified by Watson. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rqmHSR0bFU8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>By showing clips of three different parts of the story, the official trailer creates a series of enigmatic questions to arouse the viewers’ interest. What is kept behind the scratched glass wall? What kind of creature is the titular artificial being Morgan? Will the danger implied by the images of death be contained?</p>
<p>The Watson trailer doesn’t manage such a sophisticated retelling of the story. Based on its analysis of horror movie trailers, the supercomputer has created a striking visual and aural collage with a remarkably perceptive selection of images. But the official trailer is more than a random collection of visual and sound motifs. It is a film about the film, and is structured to communicate with its intended viewership by using a gift that the supercomputer doesn’t yet possess – the gift of narrative.</p><img src="https://counter.theconversation.com/content/65446/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Suman Ghosh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
IBM’s Watson watched hundreds of horror movie trailers and then created its own for the new film Morgan.
Suman Ghosh, Senior Lecturer in Film Studies, Bath Spa University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/57271
2016-07-25T03:34:43Z
2016-07-25T03:34:43Z
Welcome to Lab 2.0 where computers replace experimental science
<figure><img src="https://images.theconversation.com/files/119816/original/image-20160422-17369-1osfhnh.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1198%2C896&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Titan Supercomputer, in the US, has allowed scientists to study ice formation on wind turbines at a molecular level.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Titan_supercomputer_at_the_Oak_Ridge_National_Laboratory.jpg">Wikimedia/Oak Ridge National Laboratory</a></span></figcaption></figure><p>We spend our lives surrounded by hi-tech materials and chemicals that make our batteries, solar cells and mobile phones work. But developing new technologies requires time-consuming, <a href="https://www.cheaptubes.com/product-category/single-walled-double-walled-carbon-nanotubes/">expensive</a> and even <a href="http://www.chemistry.auckland.ac.nz/en/for/current-students/cs-health-and-safety/examples-of-real-incidents.html">dangerous</a> experiments.</p>
<p>Luckily we now have a secret weapon that allows us to <a href="https://www.whitehouse.gov/blog/2011/06/24/materials-genome-initiative-renaissance-american-manufacturing">save time, money and risk</a> by avoiding some of these experiments: computers.</p>
<p>Thanks to <a href="https://www-ssl.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html">Moore’s law</a> and a number of developments in physics, chemistry, computer science and mathematics over the past 50 years (leading to Nobel Prizes in Chemistry in <a href="https://www.nobelprize.org/nobel_prizes/chemistry/laureates/1998/">1998</a> and <a href="https://www.nobelprize.org/nobel_prizes/chemistry/laureates/2013/">2013</a>) we can now carry out many experiments entirely on computers using modelling.</p>
<p>This lets us test chemicals, drugs and hi-tech materials on a computer before ever making them in a lab, which saves time and money and reduces risks. But to dispense with labs entirely we need computer models that will reliably give us the right answers. That’s a difficult task.</p>
<h2>A grand challenge</h2>
<p>Why so difficult? Because chemistry is the quantum mechanics of interacting electrons – usually based on <a href="https://plus.maths.org/content/schrodinger-1">Schrödinger’s equation</a> – which requires enormous amounts of memory and time to model. </p>
<p>For example, to study the interaction of three water molecules, we need to store around 10<sup>80</sup> pieces of data, and do at least 10<sup>320</sup> <a href="http://infocenter.arm.com/help/topic/com.arm.doc.faqs/ka9805.html">mathematical operations</a>.</p>
<p>This basically means that when the universe ends we’d still be waiting for an answer. This is somewhat of a bottleneck.</p>
<p>But this bottleneck was broken by three major advances that allow modern computer models to approximate reality pretty well without taking billions of years.</p>
<p>Firstly, Pierre Hohenberg, Walter Kohn and Lu Jeu Sham turned the interaction problem on its head in the 1960s, <a href="http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf">greatly simplifying and improving theory</a>. </p>
<p>They showed that the electronic density – a quantum mechanical probability that is fairly easy to calculate – is all you need to determine all properties of any quantum system.</p>
<p>This is a truly remarkable result. In the case of three water molecules, their approach needs only 3,000 pieces of data and around 100 billion maths operations.</p>
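The contrast in scaling can be sketched in a few lines. Storing a full many-electron wavefunction on a grid needs one number for every combination of every electron’s coordinates, while the electronic density needs only one number per point of ordinary 3D space. The grid resolution below is an illustrative assumption, not the basis of the article’s own estimates:

```python
# Why brute-force quantum chemistry explodes: wavefunction storage grows
# exponentially with electron count, density storage does not.
def wavefunction_size(electrons, points_per_axis=10):
    # one grid axis per spatial coordinate, three coordinates per electron
    return points_per_axis ** (3 * electrons)

def density_size(points_per_axis=10):
    # the density lives in ordinary 3D space, whatever the electron count
    return points_per_axis ** 3

for molecules in (1, 2, 3):
    electrons = 10 * molecules  # a water molecule has 10 electrons
    print(molecules, wavefunction_size(electrons), density_size())
# Three water molecules: 10**90 grid values for the wavefunction
# versus just 1,000 for the density.
```

Even on this crude accounting, the density-based view turns an impossibly large problem into a trivially small one, which is the essence of the Hohenberg–Kohn–Sham result.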
<p>Secondly, in the 1970s John Pople and co-workers found a very clever way to simplify the computing method by employing mathematical and computational shortcuts. </p>
<p>This lets us use just 300 pieces of data for three water molecules. Calculations need around 100 million operations, which would take a 1975 supercomputer two seconds but can be solved <a href="http://www.phonearena.com/news/A-modern-smartphone-or-a-vintage-supercomputer-which-is-more-powerful_id57149">500 times in a second on a modern phone</a>.</p>
<p>And finally, the 1990s saw a bunch of people come up with some simple methods to approximate very complex interaction physics with surprisingly high accuracy.</p>
<p>Modern computer models are now mostly fast and mostly accurate, most of the time, for most chemistry.</p>
<h2>Quantum mechanical modelling takes off</h2>
<p>As a result, computer modelling has transformed chemistry. A quick glance through any recent chemistry journal shows that many experimental papers now include results from modelling.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=553&fit=crop&dpr=1 600w, https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=553&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=553&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=694&fit=crop&dpr=1 754w, https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=694&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/119809/original/image-20160422-17409-1uasf3y.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=694&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Computers can show quantum mechanical details no experiment can probe. Here modelling has been used to calculate and plot the electron density of a C60 bucky ball.</span>
<span class="attribution"><span class="source">Itamblyn/Wikipedia</span></span>
</figcaption>
</figure>
<p>Density functional theory (the technical name for the most common modelling method) is a feature in more than <a href="http://journals.aps.org/rmp/pdf/10.1103/RevModPhys.87.897">15,000 scientific papers</a> published in 2015. Its impact will only continue to grow as computers and theory improve.</p>
<p>Modelling is now used to <a href="http://www.nature.com/nchem/journal/v6/n12/full/nchem.2099.html">uncover chemical mechanisms</a>, to reveal details about systems that are <a href="http://science.sciencemag.org/content/351/6279/1310.abstract">hidden from experiments</a>, and <a href="http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.102.236804">to propose novel materials</a> that can later be <a href="http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.108.155501">made in a lab</a>. </p>
<p>In a particularly exciting case, computers were able to predict that a molecule C<sub>3</sub>H+ (propynylidynium) was responsible for some <a href="http://www.scientificamerican.com/article/the-hunt-for-alien-molecules/">strange astronomical observations</a>.</p>
<p>C<sub>3</sub>H+ had never before been seen on Earth. When it was later made in a lab it behaved just as the modelling predicted.</p>
<h2>New challenges need new solutions</h2>
<p>However, the rise of graphene exposed a major flaw in existing models. </p>
<p>Graphene and similar 2D materials do not stick together in the same way as most chemicals. They are instead held together by what are known as <a href="https://www.britannica.com/science/van-der-Waals-forces">van der Waals forces</a> that are not included in standard models, making them fail in 2D systems. </p>
<p>This <a href="http://arxiv.org/abs/1206.3542">failure</a> has led to a surge of interest in computer modelling of van der Waals forces. </p>
<p>For example, I was involved in an international project that used sophisticated modelling to <a href="http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.105.196401">determine the energy gained</a> by forming graphite out of layers of graphene. This energy still cannot be determined by experiments.</p>
<p>Even more usefully, 2D materials can potentially be <a href="http://www.nature.com/nature/journal/v499/n7459/abs/nature12385.html">stacked like LEGO</a>, offering vast technological promise. But there are basically an infinite number of ways to arrange these stacks. </p>
<p>We recently developed a <a href="http://link.aps.org/doi/10.1103/PhysRevB.93.165436">fast and reliable model</a> so that a computer can churn through different arrangements very quickly to find the best stacks for a given purpose. This would be impossible in a real lab.</p>
<p>On another front, electrical charge transfer in solar cells is also difficult to study with existing techniques, making the models unreliable for an important field of green technology. </p>
<p>Even worse, highly promising (but dangerous) <a href="http://www.gizmag.com/cheap-durable-perovskite-solar-cells/41618/">lead based perovskite solar cells</a> involve van der Waals forces and charge transfer together, <a href="http://pubs.rsc.org/en/content/articlelanding/cp/2014/c3cp54479f">as shown by some colleagues and me</a>.</p>
<p>A substantial effort is underway to deal with this difficult problem, and the equally difficult (and related) magnetism and conduction problems.</p>
<h2>Things will only get better</h2>
<p>The ultimate goal of computer modelling is to replace experiments almost entirely. We can then build experiments on a computer in the same way people build things in Minecraft.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LGkkyKZVzug?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The computer would model the real world to allow us to save real time and money and avoid real dangerous experiments. </p>
<p>For example, the <a href="https://web.archive.org/web/20130226173844/http://www.olcf.ornl.gov/wp-content/themes/olcf/titan/Titan_BuiltForScience.pdf">Titan supercomputer</a> (pictured top) has recently been used to study non-icing surface materials at the molecular level to improve the efficiency of wind power turbines in cold climates.</p>
<p>This ultimate goal seemed within reach in the 1990s, until experimental scientists came up with graphene and perovskites that exposed flaws in existing theories. Researchers like me continue to study, anticipate and fix these flaws so that computers can replace more challenging experiments.</p>
<p>Perhaps the 2020s will be the last decade when experiments are carried out before knowing what the answer will be. That is certainly a model worth striving for.</p><img src="https://counter.theconversation.com/content/57271/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Timothy Gould does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Developing new technologies requires time-consuming, expensive and even dangerous experiments. But now we can carry out many experiments entirely on computers using modelling.
Timothy Gould, Lecturer in Physics, Griffith University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/60168
2016-06-01T20:16:40Z
2016-06-01T20:16:40Z
Will computers replace humans in mathematics?
<figure><img src="https://images.theconversation.com/files/124546/original/image-20160531-13810-16flhes.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Computers are coming up with proofs in mathematics that are almost impossible for a human to check.</span> <span class="attribution"><span class="source">Shutterstock/Fernando Batista</span></span></figcaption></figure><p>Computers can be valuable tools for helping mathematicians solve problems but they can also play their own part in the discovery and proof of mathematical theorems.</p>
<p>Perhaps the first major result by a computer came 40 years ago, with the proof of the <a>four-colour theorem</a> – the assertion that any map (with certain reasonable conditions) can be coloured with just four distinct colours.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/124539/original/image-20160531-13773-1iv50nc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">No more than four colours are needed in this picture to make sure that no two touching shapes share the same colour.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Four_Colour_Map_Example.svg">Wikimedia/Inductiveload</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>This was first proved by computer in 1976, although flaws were later found, and a <a href="http://www.ams.org/notices/200811/tx081101382p.pdf">corrected proof</a> was not completed until 1995.</p>
<p>In 2003, Thomas Hales, of the University of Pittsburgh, published a computer-based proof of <a href="http://experimentalmath.info/blog/2014/08/formal-proof-completed-for-keplers-conjecture-on-sphere-packing/">Kepler’s conjecture</a> that the familiar method of stacking oranges in the supermarket is the most space-efficient way of arranging equal-diameter spheres.</p>
<p>Although Hales published a proof in 2003, many mathematicians were not satisfied because the proof was accompanied by two gigabytes of computer output (a large amount at the time), and some of the computations could not be certified.</p>
<p>In response, Hales produced a <a href="http://experimentalmath.info/blog/2014/08/formal-proof-completed-for-keplers-conjecture-on-sphere-packing/">computer-verified formal proof</a> in 2014.</p>
<h2>The new kid on the block</h2>
<p>The latest development along this line is the <a href="http://www.nature.com/news/two-hundred-terabyte-maths-proof-is-largest-ever-1.19990">announcement this month in Nature</a> of a computer proof for what is known as the Boolean Pythagorean triples problem. </p>
<p>The assertion here is that the integers from one to 7,824 can be coloured either red or blue in such a way that no three integers a, b and c that satisfy a<sup>2</sup> + b<sup>2</sup> = c<sup>2</sup> (Pythagoras’s Theorem, where a, b and c form the sides of a right-angled triangle) are all the same colour. For the integers from one to 7,825, this cannot be done.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=360&fit=crop&dpr=1 600w, https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=360&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=360&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=452&fit=crop&dpr=1 754w, https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=452&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/124463/original/image-20160530-7678-sa7jwl.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=452&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Pythagoras’s theorem for a right-angled triangle.</span>
<span class="attribution"><span class="source">The Conversation</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Even for small integers, it is hard to find a colouring that avoids a monochrome triple. For instance, if five is red then one of 12 or 13 must be blue, since 5<sup>2</sup> + 12<sup>2</sup> = 13<sup>2</sup>; and one of three or four must also be blue, since 3<sup>2</sup> + 4<sup>2</sup> = 5<sup>2</sup>. Each choice constrains many others.</p>
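To make the colouring condition concrete, here is a minimal brute-force checker in Python (illustrative only – the actual proof relied on industrial-strength SAT-solving techniques, and the function name is ours):

```python
def monochrome_triples(colouring):
    """Return all Pythagorean triples (a, b, c) with a^2 + b^2 = c^2
    whose three members share one colour under `colouring`, a dict
    mapping each integer in 1..n to 'red' or 'blue'."""
    n = max(colouring)
    squares = {i * i: i for i in range(1, n + 1)}
    bad = []
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = squares.get(a * a + b * b)  # c exists only if a^2 + b^2 is a square <= n^2
            if c and colouring[a] == colouring[b] == colouring[c]:
                bad.append((a, b, c))
    return bad
```

Colouring every integer red, for example, immediately yields the monochrome triple (3, 4, 5); the proof's achievement was showing that no assignment up to 7,825 avoids all such triples.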
<p>As it turns out, the number of possible ways to colour the integers from one to 7,825 is gigantic – more than 10<sup>2,300</sup> (a one followed by 2,300 zeroes). This number is far, far greater than the number of fundamental particles in the visible universe, which is a mere <a>10<sup>85</sup></a>. </p>
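The count itself is easy to reproduce: each of the 7,825 integers can independently take one of two colours, so there are 2^7,825 candidate colourings. Python's arbitrary-precision integers let us check the order of magnitude directly (variable names are ours):

```python
# Each of the integers 1..7825 is coloured red or blue independently,
# so the number of candidate colourings is 2**7825.
n_colourings = 2 ** 7825
n_digits = len(str(n_colourings))
assert n_digits == 2356  # i.e. more than 10**2355, comfortably above 10**2300
```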
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=524&fit=crop&dpr=1 600w, https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=524&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=524&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=658&fit=crop&dpr=1 754w, https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=658&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/124693/original/image-20160601-1964-12ufmt4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=658&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The numbers one to 7,824 can be coloured either red or blue so that no trio a, b and c that satisfies Pythagoras’s theorem are all the same colour. A white square can be either red or blue.</span>
<span class="attribution"><span class="source">Marijn Heule</span></span>
</figcaption>
</figure>
<p>But the researchers were able to sharply reduce this number by taking advantage of various symmetries and number theory properties, to “only” one trillion. The computer run to examine each of these one trillion cases required two days on 800 processors of the University of Texas’ <a>Stampede supercomputer</a>.</p>
<p>While direct applications of this result are unlikely, the ability to solve such difficult colouring problems is bound to have implications for coding and for security.</p>
<p>The Texas computation, which we estimate performed roughly 10<sup>19</sup> arithmetic operations, is still not the largest mathematical computation. A 2013 <a href="http://www.ams.org/notices/201307/rnoti-p844.pdf">computation</a> of digits of pi<sup>2</sup> by us and two IBM researchers did twice this many operations. </p>
<p>The Great Internet Mersenne Prime Search (<a href="http://www.mersenne.org">GIMPS</a>), a global network of computers searching for the largest known prime numbers, routinely performs a total of <a href="https://www.sciencedaily.com/releases/2016/01/160120084917.htm">450 trillion calculations per second</a>, which every six hours exceeds the number of operations performed by the Texas calculation. </p>
<p>In computer output, though, the Texas calculation takes the cake for a mathematical computation – a staggering 200 terabytes, namely 2✕10<sup>14</sup> bytes, or 30,000 bytes for every human being on Earth.</p>
<p>How can one check such a sizeable output? Fortunately, the Boolean Pythagorean triple program produced a solution (shown in the image, above) that can be checked by a much smaller program.</p>
<p>This is akin to factoring a very large number c into two smaller factors a and b by computer, so that c = a ✕ b. It is often quite difficult to find the two factors a and b, but once found, it is a trivial task to multiply them together and verify that they work.</p>
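That asymmetry is easy to demonstrate in a few lines of Python (a sketch with illustrative numbers; naive trial division stands in for a real factoring algorithm):

```python
def verify_factorisation(c, a, b):
    """Checking claimed factors is a single multiplication."""
    return a * b == c


def find_factor(c):
    """Find the smallest nontrivial factor of c by trial division --
    slow for large c, unlike the instant check above."""
    d = 2
    while d * d <= c:
        if c % d == 0:
            return d
        d += 1
    return c  # c is prime


c = 1000003 * 1000033   # a modest semiprime; finding a factor takes ~10^6 steps
a = find_factor(c)
b = c // a
assert verify_factorisation(c, a, b)  # ...but verifying it is instant
```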
<h2>Are mathematicians obsolete?</h2>
<p>So what do these developments mean? Are research mathematicians soon to join the ranks of <a href="http://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html">chess grandmasters</a>, <a href="http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html">Jeopardy champions</a>, <a href="http://www.geekwire.com/2016/more-layoffs-at-nordstrom/">retail clerks</a>, <a href="https://www.theguardian.com/technology/2016/feb/10/black-cab-drivers-uber-protest-london-traffic-standstill">taxi drivers</a>, <a href="http://www.cnet.com/news/driverless-truck-convoy-platoons-across-europe/">truck drivers</a>, <a href="http://www.huffingtonpost.com/entry/ibm-watson-radiology_us_55cbccf9e4b0898c48867c56">radiologists</a> and other professions threatened with obsolescence due to rapidly advancing technology?</p>
<p>Not quite. Mathematicians, like many other professionals, have for the most part embraced computation as a new mode of mathematical research, a development known as experimental mathematics, which has far-reaching implications.</p>
<p>So what exactly is experimental mathematics? It is best defined as a mode of research that employs computers as a “laboratory,” in the same sense that a physicist, chemist, biologist or engineer performs an experiment to, for example, gain insight and intuition, test and falsify conjecture, and confirm results proved by conventional means.</p>
<p>We have written on this topic at some length elsewhere – see our <a href="http://www.experimentalmath.info/books/">books</a> and <a href="https://www.carma.newcastle.edu.au/jon/papers.html#PAPERS">papers</a> for full technical details.</p>
<p>In one sense, there is nothing fundamentally new in the experimental methodology of mathematical research. In the third century BCE, the great Greek mathematician Archimedes <a href="https://books.google.com/books?id=Vvj_AwAAQBAJ&pg=PA314#v=onepage">wrote</a>:</p>
<blockquote>
<p>For it is easier to supply the proof when we have previously acquired, by the [experimental] method, some knowledge of the questions than it is to find it without any previous knowledge.</p>
</blockquote>
<p>Galileo once reputedly wrote:</p>
<blockquote>
<p>All truths are easy to understand once they are discovered; the point is to discover them.</p>
</blockquote>
<p>Carl Friedrich Gauss, the 19th-century mathematician and physicist, frequently employed computations to motivate his remarkable discoveries. He once wrote:</p>
<blockquote>
<p>I have the result, but I do not yet know how to get [prove] it.</p>
</blockquote>
<p>Computer-based experimental mathematics certainly has technology on its side. With every passing year, computer hardware advances with <a href="http://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html">Moore’s Law</a>, and mathematical computing software packages such as Maple, Mathematica, Sage and others become ever more powerful.</p>
<p>Already these systems are powerful enough to solve virtually any equation, derivative, integral or other task in undergraduate mathematics.</p>
<p>So while ordinary human-based proofs are still essential, the computer leads the way in assisting mathematicians to identify new theorems and chart a route to formal proof.</p>
<p>What’s more, one can argue that in many cases computations are more compelling than human-based proofs. Human proofs, after all, are subject to mistakes, oversights, and reliance on earlier results by others that may be unsound. </p>
<p>Andrew Wiles’ initial proof of <a href="http://simonsingh.net/books/fermats-last-theorem/the-whole-story/">Fermat’s Last Theorem</a> was later found to be flawed, although the flaw was subsequently fixed.</p>
<p>Along this line, recently Alexander Yee and Shigeru Kondo computed <a href="http://www.numberworld.org/misc_runs/pi-12t/">12.1 trillion digits of pi</a>. To do this, they first computed somewhat more than 10 trillion base-16 digits, then they checked their computation by computing a section of base-16 digits near the end by a completely different algorithm, and compared the results. They matched perfectly.</p>
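The base-16 choice is no accident: series such as the Bailey–Borwein–Plouffe (BBP) formula make it possible to compute hexadecimal digits of pi directly. As a minimal sketch, here is the series summed naively (without the digit-extraction machinery such record runs actually use):

```python
import math

def pi_bbp(n_terms=12):
    """Sum the BBP series for pi; each extra term contributes roughly
    one more hexadecimal digit of accuracy."""
    total = 0.0
    for k in range(n_terms):
        total += (1 / 16 ** k) * (
            4 / (8 * k + 1) - 2 / (8 * k + 4)
            - 1 / (8 * k + 5) - 1 / (8 * k + 6)
        )
    return total

# Twelve terms already agree with the built-in value to near machine precision.
assert abs(pi_bbp() - math.pi) < 1e-12
```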
<p>So which is more reliable, a human-proved theorem hundreds of pages long, which only a handful of other mathematicians have read and verified in detail, or the Yee-Kondo result? Let’s face it, computation is arguably more reliable than proof in many cases.</p>
<h2>What does the future hold?</h2>
<p>There is every indication that research mathematicians will continue to work in respectful symbiosis with computers for the foreseeable future. Indeed, as this relationship and computer technology mature, mathematicians will become more comfortable leaving certain parts of a proof to computers. </p>
<p>This very question was discussed in a June 2014 <a href="http://experimentalmath.info/blog/2014/11/breakthrough-prize-recipients-give-math-seminar-talks/">panel discussion</a> by the five inaugural recipients of the <a href="https://breakthroughprize.org/?controller=Page&action=news&news_id=18">Breakthrough Prize in Mathematics</a>. The Australian-American mathematician Terence Tao expressed their consensus in these terms:</p>
<blockquote>
<p>Computers will certainly increase in power, but I expect that much of mathematics will continue to be done with humans working with computers.</p>
</blockquote>
<p>So don’t toss your algebra textbook quite yet. You will need it!</p><img src="https://counter.theconversation.com/content/60168/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jonathan Borwein (Jon) receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>David H. Bailey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Computers are increasingly used to prove mathematical theorems. So does that mean human mathematicians will become obsolete?
Jonathan Borwein (Jon), Laureate Professor of Mathematics, University of Newcastle
David H. Bailey, PhD; Lawrence Berkeley Laboratory (retired) and Research Fellow, University of California, Davis
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/44594
2015-08-18T20:33:41Z
2015-08-18T20:33:41Z
Weather forecasting is about to get even better
<p>While some people might still joke about the reliability of weather forecasts, meteorologists are likely to nominate weather prediction as one of the great success stories of modern science – a crowning achievement of collaboration across many scientific and technological fields.</p>
<p>And now Australian weather forecasts are about to become even better, thanks to a new satellite and supercomputer.</p>
<p>Most of us simply take it for granted that weather can be forecast with some accuracy several days ahead. As measured by maximum and minimum temperature predictions for the next day, over 95% of forecasts issued by the Australian Bureau of Meteorology are verified as accurate to within 3 degrees Celsius, reflecting a steady improvement in science, weather observations, and computing power over the past 30 years.</p>
<p>But it’s not just about getting the maximum temperature right. Our ability to forecast important weather features has also improved dramatically over the past three decades. </p>
<p>For example, the recent <a href="http://www.abc.net.au/news/2015-07-13/new-south-wales-snow-damaging-roofs-felling-trees-ses/6614290">snowfalls in New South Wales</a>, which stretched into southeast Queensland, were highlighted nearly a week ahead by our weather models. And the wind change that was so influential on the fire behaviour on Black Saturday was predicted several days in advance. Such foresight would have been impossible over a decade ago. </p>
<h2>A brief history of weather forecasting</h2>
<p>In the pre-satellite era, the forecaster’s ability to analyse weather systems was limited by the availability of surface observations from weather stations. There were huge data gaps, such as over the Southern Ocean. Weather charts were hand-drawn, including the position of high and low pressure systems and cold fronts. High-impact weather events could catch communities by surprise.</p>
<p>In the early 1970s, the first weather satellites dramatically changed this, sending vital new information back to Earth that improved our understanding of the Southern Hemisphere’s weather patterns. </p>
<p>At the same time, as supercomputing became cheaper and more powerful, numerical models began to replace the entirely manual analysis of early forecasters. </p>
<p>A forecast model solves fundamental equations of fluid dynamics and heat transfer to compute the evolution of the atmosphere with time (or “the weather”, in other words). While the basic formulae for doing this, based on Newtonian physics, have been known for almost a century, we had to wait for the growth in computing power to apply this knowledge to weather prediction.</p>
<p>The role of the forecaster continues to evolve as numerical prediction skill improves further and extra, more frequent, observations become available. </p>
<p>Ironically, rather than being challenged by limited information, modern forecasting techniques grapple with how best to assimilate the terabytes of data that flood in from all manner of observation sources and models.</p>
<h2>Our region’s next-generation satellite is now in orbit</h2>
<p>Modern meteorology is underpinned by satellites, providing real-time situational awareness, such as the position of a tropical cyclone, and the major initial input into weather models.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/92228/original/image-20150818-12389-10awnk1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Monitoring typhoons is a crucial job for weather satellites.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File%3AHaiyan_2013-11-07_0120Z.jpg">NASA/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>The main source of data comes from the polar orbiting satellites. These operate about 700 km above the Earth, roughly twice the height of the International Space Station. At these heights, satellite instruments can extract a vertical cross-section of the atmosphere, revealing such things as moisture, winds and temperatures.</p>
<p>Polar orbiting satellites scan the same area of the planet only twice a day which, if you are concerned about developing severe weather, is not frequent enough. For more frequent updates forecasters rely on satellites in a geostationary orbit.</p>
<p>At an altitude of 35,786 km above the Equator, a geostationary satellite has the same orbital period as the rotation of the Earth, and there is effectively no relative motion between the satellite and the ground. </p>
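That altitude follows directly from Kepler's third law. A quick back-of-the-envelope check, using standard values for Earth's gravitational parameter, sidereal day and equatorial radius:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3 / s^2
T = 86164.1           # sidereal day (one rotation of the Earth), s
R_EARTH = 6378.1e3    # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
r = (GM * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
# altitude_km comes out near 35,786 km, the geostationary altitude
```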
<p>For weather applications, this allows continuous monitoring of the area visible to the satellite, equating to roughly 40% of Earth’s surface. Visible and infra-red images of cloud cover from these satellites are familiar to most people, being routinely shown on television weather bulletins and on the Bureau of Meteorology’s <a href="http://www.bom.gov.au/australia/satellite/">website</a>.</p>
<p>In October last year, the Japanese Meteorological Agency launched the 3.5-tonne Himawari-8 satellite into geostationary orbit above the western Pacific region, the first of a new generation of advanced meteorological satellites. </p>
<p>It provides a significant increase in the spatial and temporal resolution of satellite images, increasing the spatial resolution to 500m and increasing the frequency to every ten minutes, giving forecasters rapid updates on developing meteorological conditions, particularly in areas without radar coverage. </p>
<p>A key benefit will be the ability to observe thunderstorm formation. Other benefits will be seen in the detection of tropical cyclone genesis, detection and tracking of bushfire movements using hotspot algorithms, improved observation of fog, and faster detection and analysis of volcanic eruptions.</p>
<p>From September 2015 the <a href="http://himawari8.nict.go.jp/">imagery from Himawari-8</a> will be available on the Bureau’s website.</p>
<h2>Improvements in Numerical Weather Prediction</h2>
<p>One of the biggest contributors to improved weather forecasts is the increase in supercomputing power. The Bureau’s new supercomputer – to be built by Cray, costing A$77 million and funded by the Federal Government – will be the fastest in Australia when it becomes operational in mid-2016. </p>
<p>But it’s not a case of simply upgrading to new hardware – improved forecasts are dependent on taking advantage of the increased computing power. Over the next few years, the Bureau will use the supercomputer to implement a next-generation high-resolution weather forecasting model.</p>
<p>Weather forecasts start in the real world with data about what’s actually happening at the start of the forecast period. These data – including temperature, humidity, surface pressure and wind, collected from a variety of sources – are fed into the models in a process known as data assimilation. </p>
<p>As the models improve, and more data becomes available, techniques for data assimilation must also be updated. </p>
<p>In the Southern Hemisphere, satellite data can make up more than 95% of the observational data fed into forecasting models.</p>
<p>Recently the Bureau tested a prototype forecast model in New South Wales with a resolution of 1.5 km and hourly updates. This type of high-resolution model can assimilate 10-minute data from Himawari-8, and allows us to capture thunderstorms and sea breezes that are too fine in scale for current forecast systems.</p>
<p>The forecast model takes all available observations and essentially evolves the simulated atmosphere forward in time to create the actual weather forecast.</p>
<p>The models do this by breaking the atmosphere up into small grid boxes or cells. The current regional model has cells that are 12 km wide – too large to represent individual clouds, which are typically hundreds of metres across. The model thus estimates these “sub-grid-scale” processes using physics. </p>
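As a toy illustration of the grid-box idea – nothing like the Bureau's operational model, and with names of our own invention – here is a single time step of one-dimensional advection on a periodic grid, using the classic upwind scheme:

```python
def advect_step(u, wind, dx, dt):
    """Advance the 1-D advection equation du/dt + wind * du/dx = 0
    by one time step on a periodic grid of cell values `u`, using the
    upwind scheme (assumes wind > 0 and Courant number c <= 1)."""
    c = wind * dt / dx  # Courant number; c > 1 makes the scheme unstable
    # Python's u[i - 1] wraps around at i = 0, giving periodic boundaries.
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

# A blob of "temperature" in one cell moves downwind; with c = 1
# it shifts exactly one cell per step.
u = [0.0] * 10
u[3] = 1.0
u_next = advect_step(u, wind=10.0, dx=1000.0, dt=100.0)  # c = 1
```

Operational models solve far richer coupled equations on three-dimensional grids, but the principle – stepping cell values forward in time from their neighbours – is the same.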
<p>As the resolution of the models increases, the sub-grid-scale physics has to evolve. This is both a boon and a challenge for forecasting. </p>
<h2>Get ready for new weather services</h2>
<p>With the advent of the new satellite and supercomputer, and the advancing science, the public can expect a step change in weather forecasting services over the next decade. Improvements can be expected in near-real-time information for unfolding weather events, and in lead times for forecasts that assist our warning, response and recovery efforts for severe weather.</p>
<p>As with all advances in technology, it is impossible to predict exactly what the new service opportunities will be as they connect with advances in communications, but we know they’ll continue to evolve and excite, for both meteorologists and the public.</p><img src="https://counter.theconversation.com/content/44594/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Moaning about weather forecasts is almost an Australian national pastime. But weather predictions have improved a lot, and with a new satellite and supercomputer, they are about to get even more reliable.
Paul Gregory, BOM, Australian Bureau of Meteorology
Anthony Rea, Assistant Director, Observing Strategy and Operations, Australian Bureau of Meteorology
Gary Dietachmayer, Atmospheric Modelling Team Leader, Australian Bureau of Meteorology
Karl Braganza, Manager, Climate Monitoring Section, Australian Bureau of Meteorology
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/24987
2014-04-01T05:11:25Z
2014-04-01T05:11:25Z
So supercomputers are mega-powerful, but what can they actually do?
<figure><img src="https://images.theconversation.com/files/45198/original/x2hzht4d-1396284683.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Engineer on the prowl between the big black boxes</span> <span class="attribution"><span class="source">University of Edinburgh</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>A new supercomputer, called <a href="http://www.archer.ac.uk">ARCHER</a>, has recently been launched. ARCHER is a Cray XC30, funded by <a href="http://www.epsrc.ac.uk">EPSRC</a> and <a href="http://www.nerc.ac.uk">NERC</a>. It is more than three times more powerful than its predecessor, <a href="http://www.hector.ac.uk">HECToR</a>, and is hosted by the <a href="http://www.ed.ac.uk">University of Edinburgh</a>. But what can you actually do with a supercomputer? </p>
<p>Supercomputers allow researchers to carry out experiments that would otherwise be impossible because they are too small or too large, too fast or too slow, or simply too expensive. Coupling supercomputers to large data repositories also allows researchers to solve problems by analysing Big Data and so opening doors to even more new areas.</p>
<p>The most exciting developments in computational science are the significant growth in the range of scientific and engineering problems that can be solved and the corresponding increase in the impact on the lives of all of us. Supercomputers can solve important problems for the environment, transport, health and energy.</p>
<h2>ARCHER’s research power</h2>
<p>The single largest area of science that ARCHER will be used for is the realm of chemistry and materials science. The larger computing power available is enabling researchers in this area to actually explore the chemical properties of materials in physically realistic environments rather than making approximations and using idealised systems. This step-change in modelling ability allows the scientific research to have a direct impact on our day-to-day lives much more quickly than was possible previously.</p>
<p><a href="http://en.wikipedia.org/wiki/Bioactive_glass">Bioactive silicate glasses</a> are key materials used to help restore, regenerate and repair bones and other tissues in the body. A key mechanism in their function is the fast dissolution of their surface in the biological environment – this releases key ions and promotes the repair process. </p>
<p>Using ARCHER, researchers are able to investigate this process in a realistic biological environment at the quantum scale. This has simply not been possible using previous generations of supercomputers. Understanding these processes at the quantum level is driving the design of new, improved bioactive glasses.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=275&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=275&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=275&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=346&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=346&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45020/original/xr4s6pph-1396017329.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=346&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Atomic structures from simulations of a bioglass.</span>
<span class="attribution"><span class="source">Tilocca, Phys. Chem. Chem. Phys., 2014, 16, 3874-3880, DOI: 10.1039/C3CP54913E</span></span>
</figcaption>
</figure>
<p>Also of great importance in the medical arena, ARCHER is being used to understand the resistance of many bacteria to modern antibiotics. This is one of the most important problems in contemporary public health: it limits our ability to combat infection, both through the reduced efficacy of the drugs and through the constraints placed on their prescription. </p>
<p>Researchers are using a combination of experiment and large molecular simulations to understand how, at a molecular level, mutations enable resistance to antibiotics in the bacteria that cause meningitis, among other diseases. Simulating these systems in realistic biological environments, over the long timescales required to understand the mechanism of resistance, has not previously been possible. The additional speed and capacity of ARCHER allow researchers to gain a more detailed understanding through realistic simulations, shortening the time between research and real impact for everyone.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=452&fit=crop&dpr=1 600w, https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=452&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=452&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=568&fit=crop&dpr=1 754w, https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=568&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/45021/original/ryqvmryk-1396017336.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=568&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Image from a simulation of a bacterial ion channel that impacts antibiotic effectiveness.</span>
<span class="attribution"><span class="source">Chen Song et al, PNAS (2013), 110:12, 4586–4591, doi: 10.1073/pnas.1214739110</span></span>
</figcaption>
</figure>
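<p>As an illustrative aside, the core of any molecular dynamics simulation, at whatever scale, is a loop that repeatedly advances atomic positions under interatomic forces. The toy sketch below uses a single harmonic "bond" as a stand-in for a real force field; it only illustrates the idea of timestepping, and bears no relation to the production codes run on ARCHER, which use far richer force fields and massive parallelism.</p>

```python
# Toy velocity Verlet integrator: one particle on a harmonic "bond".
# Illustrative only -- real biomolecular force fields have many terms.

def force(x, k=1.0, x0=1.0):
    """Harmonic restoring force: F = -k * (x - x0)."""
    return -k * (x - x0)

def velocity_verlet(x, v, dt=0.01, steps=1000, m=1.0):
    """Advance position x and velocity v through `steps` timesteps."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt          # velocity update
        f = f_new
    return x, v

# A particle stretched to x = 1.5 oscillates about the bond length x0 = 1.0.
x, v = velocity_verlet(x=1.5, v=0.0)
```

<p>A real simulation does the same thing for hundreds of thousands of atoms at once, which is why the speed and memory of a machine like ARCHER matter.</p>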
<p>ARCHER will also play a key role in UK climate modelling research. Understanding the climate and climate change is an inherently complex problem that requires coupling between many different systems – atmospheric models, ocean models and land models. The complexity (and therefore realism) of these individual components is limited by the computing power available. </p>
<h2>Mission: Earth</h2>
<p>Using ARCHER, climate researchers will be able to run full Earth system models with the additional complexity required in, for example, modelling evaporation from land and the associated plant transpiration. These more detailed integrated models will give us a deeper understanding of the drivers of climate change, the consequences arising and what information we require to prepare properly for future changes in climate. The same techniques and models are also being used to understand climate on planets beyond the solar system.</p>
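<p>To give a flavour of what "coupling" means in practice, the hypothetical sketch below (not any real model's code) exchanges heat between a toy "atmosphere" box and "ocean" box each timestep. Real Earth system models couple many far more detailed components in the same basic way, across thousands of processors.</p>

```python
# Hypothetical two-box coupling sketch: an "atmosphere" and an "ocean"
# exchange heat each timestep, conserving total heat content.
# Toy physics only -- real coupled models exchange many fields this way.

def step_coupled(T_atm, T_ocean, dt=1.0, k_exchange=0.05,
                 c_atm=1.0, c_ocean=40.0):
    """Advance both boxes one timestep of length dt."""
    # Heat flux proportional to the temperature difference.
    flux = k_exchange * (T_ocean - T_atm) * dt
    T_atm += flux / c_atm        # atmosphere: small heat capacity
    T_ocean -= flux / c_ocean    # ocean: large heat capacity
    return T_atm, T_ocean

T_atm, T_ocean = -10.0, 15.0
for _ in range(500):
    T_atm, T_ocean = step_coupled(T_atm, T_ocean)
# The two boxes relax toward a shared equilibrium temperature,
# weighted towards the ocean because of its larger heat capacity.
```

<p>The computing cost comes from making each box realistic: each component in a full Earth system model is itself a huge simulation, and they must all stay synchronised as they exchange data.</p>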
<p>These examples represent only a small sample of the science coming out of supercomputers. Computational science is also using biomechanical models to understand how dinosaurs moved; simulating the energy production of future fusion reactors; exploring new renewable energy technologies such as <a href="http://en.wikipedia.org/wiki/Dye-sensitized_solar_cell">dye-sensitised solar cells</a>; and designing quieter, more efficient aeroplanes.</p>
<p>Supercomputing continues to grow at an exponential rate. The performance of current systems has allowed exciting progress across many scientific disciplines. Computational research is no longer limited by computer power to anything like the extent it was 10 years ago, allowing creative researchers to address new problems all the time. ARCHER and other similar systems will ensure research breakthroughs in a broad range of important areas, with significant socioeconomic benefits for everyone.</p>
<p class="fine-print"><em><span>Alan Simpson is employed by EPCC at the University of Edinburgh. He works on the ARCHER project that is funded by EPSRC and NERC.</span></em></p><p class="fine-print"><em><span>Andrew Turner is employed by EPCC at the University of Edinburgh. He works on the ARCHER project that is funded by EPSRC and NERC.</span></em></p>
Alan Simpson, Technical Director, Edinburgh Parallel Computing Centre, The University of Edinburgh
Andrew Turner, Project Manager, Edinburgh Parallel Computing Centre, The University of Edinburgh
Licensed as Creative Commons – attribution, no derivatives.