<h1>How long before quantum computers can benefit society? That’s Google’s US$5 million question</h1>
<figure><img src="https://images.theconversation.com/files/583117/original/file-20240320-26-rmpub2.jpg?ixlib=rb-1.1.0&rect=5%2C0%2C3828%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/quantum-computer-black-background-3d-render-1571871052">Bartlomiej K. Wroblewski / Shutterstock</a></span></figcaption></figure>
<p>Google and the XPrize Foundation have launched a competition worth US$5 million (£4 million) to develop <a href="https://blog.google/technology/research/google-gesda-and-xprize-launch-new-competition-in-quantum-applications/">real-world applications for quantum computers</a> that benefit society – by speeding up progress on one of the UN Sustainable Development Goals, for example. The principles of quantum physics suggest quantum computers could perform very fast calculations on particular problems, so this competition may expand the range of applications where they have an advantage over conventional computers.</p>
<p>In our everyday lives, the way nature works can generally be described by what we call <a href="https://en.wikipedia.org/wiki/Classical_physics#:%7E:text=Classical%20physical%20concepts%20are%20often,of%20quantum%20mechanics%20and%20relativity.">classical physics</a>. But nature behaves very differently at tiny quantum scales – below the size of an atom. </p>
<p>The race to harness quantum technology can be viewed as a new industrial revolution, progressing from devices that use the properties of classical physics to those utilising the <a href="https://www.energy.gov/science/doe-explainsquantum-mechanics#:%7E:text=Quantum%20mechanics%20is%20the%20field,%E2%80%9Cwave%2Dparticle%20duality.%E2%80%9D">weird and wonderful properties of quantum mechanics</a>. Scientists have spent decades trying to develop new technologies by harnessing these properties. </p>
<p>Given how often we are told that <a href="https://projects.research-and-innovation.ec.europa.eu/en/horizon-magazine/quantum-technologies">quantum technologies</a> will revolutionise our everyday lives, you may be surprised that we still have to search for practical applications by offering a prize. However, while there are numerous examples of success using quantum properties for enhanced precision in sensing and timing, there has been a surprising lack of progress in the development of quantum computers that outdo their classical predecessors.</p>
<p>The main bottleneck holding up this development is that the software – the <a href="https://www.nature.com/articles/npjqi201523">quantum algorithms</a> these machines run – still needs to demonstrate an advantage over computers based on classical physics. This is commonly known as <a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">“quantum advantage”</a>.</p>
<p>A crucial way quantum computing differs from classical computing is in using a property known as <a href="https://spectrum.ieee.org/what-is-quantum-entanglement">“entanglement”</a>. Classical computing <a href="https://web.stanford.edu/class/cs101/bits-bytes.html">uses “bits”</a> to represent information. These bits consist of ones and zeros, and everything a computer does comprises strings of these ones and zeros. But quantum computing allows these bits to be in a <a href="https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-qubit">“superposition” of ones and zeros</a>. In other words, it is as if these ones and zeros occur simultaneously in the quantum bit, or qubit.</p>
<p>It is this property that, in principle, allows many computational possibilities to be explored at once – hence the belief that quantum computing can offer a significant advantage over classical computing for certain tasks. </p>
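<p>To make superposition slightly more concrete, here is a minimal sketch in plain Python with NumPy (not a real quantum programming kit, and the variable names are purely illustrative). A single qubit is represented as a two-component vector of amplitudes; applying a Hadamard gate to a qubit that starts in the definite-0 state puts it into an equal superposition, so a measurement returns 0 or 1 with equal probability.</p>
<pre><code class="language-python">
import numpy as np

# A qubit is a two-component complex vector: amplitudes for the 0 and 1 states.
ket0 = np.array([1.0, 0.0], dtype=complex)          # the definite-0 state

# The Hadamard gate rotates the 0 state into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

qubit = H @ ket0                                     # now in superposition

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(qubit) ** 2
print(probs)   # [0.5 0.5]: a 50/50 chance of reading 0 or 1
</code></pre>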
<hr>
<p><em><strong>Read more: <a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers</a></strong></em></p>
<hr>
<h2>Notable quantum algorithms</h2>
<p>While performing many tasks simultaneously should lead to a performance increase over classical computers, putting this into practice has proven more difficult than theory would suggest. There are actually only a few notable quantum algorithms which can perform their tasks better than those using classical physics.</p>
<figure class="align-center ">
<img alt="Quantum chips - rendering" src="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/futuristic-cpu-quantum-processor-global-computer-1210158169">Yurchanka Siarhei / Shutterstock</a></span>
</figcaption>
</figure>
<p>The most notable are the <a href="https://www.st-andrews.ac.uk/physics/quvis/simulations_html5/sims/cryptography-bb84/Quantum_Cryptography.html">BB84 protocol</a>, developed in 1984, and <a href="https://www.nature.com/articles/s41598-021-95973-w">Shor’s algorithm</a>, developed in 1994, both of which exploit quantum properties to outperform classical approaches on particular tasks. </p>
<p>The BB84 protocol is a cryptographic protocol – a system for ensuring secure, private communication between two or more parties – that is considered more secure than comparable classical schemes, because its security rests on the laws of quantum physics rather than on assumptions about how hard certain calculations are.</p>
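<p>As a rough illustration of how BB84 works, the sketch below (plain Python, with no eavesdropper, noise or error correction) simulates the “sifting” step: the sender encodes random bits in randomly chosen bases, the receiver measures in its own random bases, and only the positions where the two bases happen to match are kept as shared secret key material.</p>
<pre><code class="language-python">
import random

n = 16  # number of qubits sent (tiny, for illustration only)

# The sender (Alice) picks random bits and random bases: '+' rectilinear, 'x' diagonal.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice('+x') for _ in range(n)]

# The receiver (Bob) measures each incoming qubit in his own randomly chosen basis.
bob_bases = [random.choice('+x') for _ in range(n)]

# If Bob's basis matches Alice's he recovers her bit; otherwise his result is random.
bob_results = [bit if a == b else random.randint(0, 1)
               for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: the bases (not the bits) are compared publicly; only matching positions are kept.
key = [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]
print('shared key bits:', key)
</code></pre>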
<p>Shor’s algorithm shows how many current <a href="https://www.rand.org/pubs/commentary/2023/09/when-a-quantum-computer-is-able-to-break-our-encryption.html#:%7E:text=One%20of%20the%20most%20important,secure%20internet%20traffic%20against%20interception.">classical encryption protocols could be broken</a>, because their security relies on the difficulty of factorising very large numbers – a task a sufficiently powerful quantum computer could perform efficiently. <a href="https://ieeexplore.ieee.org/document/365700">There is also evidence</a> that it can perform certain calculations faster than similar algorithms designed for conventional computers. </p>
<p>Despite the superiority of these two algorithms over conventional ones, few advantageous quantum algorithms have followed. However, researchers have not given up trying to develop them. Currently, there are a couple of main directions in research.</p>
<h2>Potential quantum benefits</h2>
<p>The first is to use quantum mechanics to assist in what are called <a href="https://arxiv.org/abs/2312.02279">large-scale optimisation tasks</a>. Optimisation – finding the best or most effective way to solve a particular task – is vital in everyday life, from ensuring traffic flow runs effectively, to managing operational procedures in factory pipelines, to streaming services deciding what to recommend to each user. It seems clear that quantum computers could help with these problems.</p>
<p>If we could reduce the computational time required to perform the optimisation, it could save energy, reducing the carbon footprint of the many computers currently performing these tasks around the world and the data centres supporting them.</p>
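<p>One common way such optimisation problems are phrased for quantum hardware, such as quantum annealers, is as a QUBO (quadratic unconstrained binary optimisation): find the string of 0s and 1s that minimises a quadratic cost. The sketch below is a purely classical, brute-force illustration with a made-up cost matrix; a realistic problem would have far too many variables to enumerate this way, which is exactly where a quantum speed-up would matter.</p>
<pre><code class="language-python">
import itertools
import numpy as np

# A tiny, made-up QUBO: minimise x^T Q x over binary vectors x.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 2.0, -1.0,  1.0],
              [ 0.0,  1.0, -2.0]])

def cost(x):
    x = np.array(x)
    return float(x @ Q @ x)

# Brute force works for 3 variables but becomes hopeless for thousands.
best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=cost)
print('best assignment:', best, 'with cost', cost(best))
</code></pre>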
<p>Another development that could offer wide-reaching benefits is to use quantum computation to simulate systems, such as combinations of atoms, that behave according to quantum mechanics. Understanding and predicting how quantum systems work in practice could, for example, lead to better drug design and medical treatments. </p>
<p>Quantum systems could also lead to improved electronic devices. As computer chips get smaller, quantum effects take hold, potentially reducing the devices’ performance. A better fundamental understanding of quantum mechanics could help avoid this.</p>
<p>While there has been significant investment in building quantum computers, there has been less focus on ensuring they will directly benefit the public. However, that now appears to be changing.</p>
<p>Whether we will all have quantum computers in our homes within the next 20 years remains doubtful. But, given the current financial commitment to making quantum computation a practical reality, it seems that society is finally in a better position to make use of them. What precise form will this take? There’s US$5 million on the line to find out.</p>
<p class="fine-print"><em><span>Adam Lowe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Quantum computing has huge promise from a technical perspective, but the practical benefits are less clear.Adam Lowe, Lecturer, School of Computer Science and Digital Technologies, Aston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2244382024-03-04T13:41:42Z2024-03-04T13:41:42ZDemand for computer chips fuelled by AI could reshape global politics and security<figure><img src="https://images.theconversation.com/files/578585/original/file-20240228-18-rudxyy.jpg?ixlib=rb-1.1.0&rect=28%2C0%2C6361%2C3592&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-silicon-die-being-extracted-semiconductor-2262331365">IM Imagery / Shutterstock</a></span></figcaption></figure><p>A global race to build powerful computer chips that are essential for the next generation of artificial intelligence (AI) tools could have a major impact on global politics and security. </p>
<p>The US is currently leading the race in the design of these chips, also known as semiconductors. But most of the manufacturing is carried out in Taiwan. Debate about this race has been fuelled by the call from Sam Altman, CEO of ChatGPT’s developer OpenAI, for <a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0">a US$5 trillion to US$7 trillion</a> (£3.9 trillion to £5.5 trillion) global investment to <a href="https://venturebeat.com/ai/sam-altman-wants-up-to-7-trillion-for-ai-chips-the-natural-resources-required-would-be-mind-boggling/">produce more powerful chips</a> for the next generation of AI platforms. </p>
<p>The amount of money Altman called for is more than the chip industry has spent in total since it began. Whatever the facts about those numbers, overall projections for the AI market are mind-blowing. The data analytics company GlobalData <a href="https://www.globaldata.com/media/technology/generative-ai-will-go-mainstream-2024-driven-adoption-specialized-custom-models-multimodal-tool-experimentation-says-globaldata/">forecasts that the market will be worth US$909 billion</a> by 2030.</p>
<p>Unsurprisingly, over the past two years, the US, China, Japan and several European countries have increased their budget allocations and put in place measures to secure or maintain a share of the chip industry for themselves. China is catching up fast and is <a href="https://thediplomat.com/2023/09/china-boosts-semiconductor-subsidies-as-us-tightens-restrictions/">subsidising chips, including next-generation ones for AI</a>, to the tune of hundreds of billions of dollars over the next decade to build a manufacturing supply chain. </p>
<p>Subsidies seem to be the <a href="https://www.reuters.com/technology/germany-earmarks-20-bln-eur-chip-industry-coming-years-2023-07-25/">preferred strategy for Germany too</a>. The UK government has announced its <a href="https://www.ukri.org/news/100m-boost-in-ai-research-will-propel-transformative-innovations/#:%7E:text=%C2%A3100m%20boost%20in%20AI%20research%20will%20propel%20transformative%20innovations,-6%20February%202024&text=Nine%20new%20research%20hubs%20located,help%20to%20define%20responsible%20AI.">plans to invest £100 million</a> to support regulators and universities in addressing challenges around artificial intelligence. </p>
<p>The economic historian Chris Miller, the author of the book Chip War, <a href="https://www.dw.com/en/ai-chip-race-fears-grow-of-huge-financial-bubble/a-68272265">has talked about how powerful chips have become a “strategic commodity”</a> on the global geopolitical stage.</p>
<p>Despite the efforts by several countries to invest in the future of chips, there is currently a shortage of the types needed for AI systems. Miller recently explained that 90% of the chips used to train, or improve, AI systems are <a href="https://www.siliconrepublic.com/future-human/chip-war-semiconductors-supply-tech-geopolitics-chris-miller">produced by just one company</a>.</p>
<p>That company is the <a href="https://www.tsmc.com/english">Taiwan Semiconductor Manufacturing Company (TSMC)</a>. Taiwan’s dominance in the chip manufacturing industry is notable because the island is also the focus for tensions between China and the US. </p>
<hr>
<p><em><strong>Read more: <a href="https://theconversation.com/the-microchip-industry-would-implode-if-china-invaded-taiwan-and-it-would-affect-everyone-206335">The microchip industry would implode if China invaded Taiwan, and it would affect everyone</a></strong></em></p>
<hr>
<p>Taiwan has, for the most part, <a href="https://www.taiwan.gov.tw/content_3.php#:%7E:text=The%20ROC%20government%20relocated%20to,rule%20of%20a%20different%20government.">been independent since the middle of the 20th century</a>. However, Beijing believes it should be <a href="https://www.reuters.com/world/asia-pacific/china-calls-taiwan-president-frontrunner-destroyer-peace-2023-12-31/">reunited with the rest of China</a> and US legislation requires Washington to <a href="https://www.congress.gov/bill/96th-congress/house-bill/2479#:%7E:text=Declares%20that%20in%20furtherance%20of,defense%20capacity%20as%20determined%20by">help defend Taiwan if it is invaded</a>. What would happen to the chip industry under such a scenario is unclear, but it is obviously a focus for global concern.</p>
<p>The disruption of supply chains in chip manufacturing has the potential to bring entire industries to a halt. Access to the raw materials, such as rare earth metals, used in computer chips has also proven to be an important bottleneck. For example, China <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">controls 60% of the production of gallium metal</a> and 80% of the global production of germanium. These are both critical raw materials used in chip manufacturing.</p>
<figure class="align-center ">
<img alt="Sam Altman" src="https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">OpenAI CEO Sam Altman has called for a US$5 trillion to $7 trillion investment in chips to support the growth in AI.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/openai-ceo-sam-altman-attends-artificial-2412159621">Photosince / Shutterstock</a></span>
</figcaption>
</figure>
<p>And there are other, lesser known bottlenecks. A process called <a href="https://research.ibm.com/blog/what-is-euv-lithography">extreme ultraviolet (EUV) lithography</a> is vital for the ability to continue making computer chips smaller and smaller – and therefore more powerful. <a href="https://www.asml.com/en">A single company in the Netherlands, ASML</a>, is the only manufacturer of EUV systems for chip production.</p>
<p>However, chip factories are increasingly being built outside Asia again – something that has the potential to reduce over-reliance on a few supply chains. Plants in the US are being subsidised to the tune of <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">US$43 billion and in Europe, US$53 billion</a>. </p>
<p>For example, the Taiwanese semiconductor manufacturer TSMC is planning to build a multibillion-dollar facility in Arizona. When it opens, that factory <a href="https://theconversation.com/the-microchip-industry-would-implode-if-china-invaded-taiwan-and-it-would-affect-everyone-206335">will not be producing the most advanced chips</a> that it is currently possible to make, many of which are still produced in Taiwan.</p>
<p>Moving chip production outside Taiwan could reduce the risk to global supplies in the event that manufacturing were somehow disrupted. But this process could take years to have a meaningful impact. It’s perhaps not surprising that, for the first time, this year’s Munich Security Conference <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">created a chapter devoted to technology</a> as a global security issue, with discussion of the role of computer chips. </p>
<h2>Wider issues</h2>
<p>Of course, the demand for chips to fuel AI’s growth is not the only way that artificial intelligence will make a major impact on geopolitics and global security. The growth of disinformation and misinformation online has transformed politics in recent years by inflating prejudices on both sides of debates. </p>
<p>We have seen it <a href="https://www.jstor.org/stable/26675075">during the Brexit campaign</a>, during <a href="https://journals.sagepub.com/doi/10.1177/20563051231177943">US presidential elections</a> and, more recently, during the <a href="https://apnews.com/article/israel-hamas-gaza-misinformation-fact-check-e58f9ab8696309305c3ea2bfb269258e">conflict in Gaza</a>. AI could be the ultimate amplifier of disinformation. Take, for example, deepfakes – AI-manipulated videos, audio or images of public figures. These could easily fool people into thinking a major <a href="https://www.theguardian.com/us-news/2024/feb/26/ai-deepfakes-disinformation-election">political candidate had said something they didn’t</a>.</p>
<p>As a sign of this technology’s growing importance, at the 2024 Munich Security Conference, 20 of the world’s largest tech companies <a href="https://news.microsoft.com/2024/02/16/technology-industry-to-combat-deceptive-use-of-ai-in-2024-elections/">launched something called the “Tech Accord”</a>. In it, they pledged to cooperate to create tools to spot, label and debunk deepfakes. </p>
<p>But should such important issues be left to tech companies to police? Mechanisms such as the EU’s Digital Services Act and the UK’s Online Safety Bill, as well as frameworks to regulate AI itself, should help. But it remains to be seen what impact they can have on the issue.</p>
<p>The issues raised by the chip industry and the growing demand driven by AI’s growth are just one way that AI is driving change on the global stage. But it remains a vitally important one. National leaders and authorities must not underestimate the influence of AI. Its potential to redefine geopolitics and global security could exceed our ability to both predict and plan for the changes.</p>
<p class="fine-print"><em><span>Alina Vaduva is affiliated with the Labour Party, as a member and elected councillor in Dartford, Kent. </span></em></p><p class="fine-print"><em><span>Kirk Chang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The effects of AI’s growth on global security could be difficult to predict.Kirk Chang, Professor of Management and Technology, University of East LondonAlina Vaduva, Director of the Business Advice Centre for Post Graduate Students at UEL, Ambassador of the Centre for Innovation, Management and Enterprise, University of East LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2193562024-01-25T16:01:30Z2024-01-25T16:01:30ZSpreadsheet errors can have disastrous consequences – yet we keep making the same mistakes<figure><img src="https://images.theconversation.com/files/570338/original/file-20240119-21-5frvd3.jpg?ixlib=rb-1.1.0&rect=75%2C0%2C8386%2C5573&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Making mistakes with spreadsheets can not only cause us personal frustration but can also lead to some very serious consequences. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sad-tired-medical-coding-bill-spreadsheets-2197496803">Andrey_Popov/Shutterstock</a></span></figcaption></figure><p>Spreadsheet blunders aren’t just frustrating personal inconveniences. They can have serious consequences. And in the last few years alone, there have been a myriad of spreadsheet horror stories. </p>
<p>In August 2023, the Police Service of Northern Ireland <a href="https://www.bbc.co.uk/news/uk-northern-ireland-66445452">apologised</a> for a data leak of “monumental proportions” when a spreadsheet that contained statistics on the number of officers it had and their rank was shared online in response to a freedom of information request. </p>
<p>There was a second overlooked tab on the spreadsheet that contained the personal details of 10,000 serving police officers. </p>
<p>A <a href="https://anro.wm.hee.nhs.uk/Portals/3/Anaesthetics%20Recruitment%20-%20Significant%20Incident%20Report%20-%20Dec%2021.pdf?ver=hqDrm_-syzeLmBcfbigWJA%3D%3D">series of spreadsheet errors</a> disrupted the recruitment of trainee anaesthetists in Wales in late 2021. The Anaesthetic National Recruitment Office (ANRO), the body responsible for their selection and recruitment, told all the candidates for positions in Wales they were “unappointable”, despite some of them achieving the highest interview scores.</p>
<p>The blame fell on the process of consolidating interview data. Spreadsheets from different areas lacked standardisation in formatting, naming conventions and overall structure. To make matters worse, data was manually copied and pasted between various spreadsheets, a time-consuming and error-prone process.</p>
<p>ANRO only discovered the blunder when rejected applicants questioned their dismissal letters. The fact that not a single candidate seemed acceptable for Welsh positions should have been a red flag. No testing or validation was apparently applied to the crucial spreadsheet, a simple step that could have prevented this critical error.</p>
<p>In 2021, Crypto.com, an online provider of cryptocurrency, <a href="https://www.theguardian.com/australia-news/2023/sep/24/a-crypto-firm-sent-a-disability-worker-10m-by-mistake-months-later-she-was-arrested-at-an-australian-airport">accidentally transferred</a> US$10.5 million (£8.3 million) instead of US$100 into the account of an Australian customer due to an incorrect number being entered on a spreadsheet. </p>
<p>The clerk who processed the refund for the Australian customer had wrongly entered her bank account number in the refund field in a spreadsheet. It was seven months before the mistake was spotted. The recipient attempted to flee to Malaysia but was stopped at an Australian airport carrying a large amount of cash.</p>
<p>In 2022, Íslandsbanki, a state-owned Icelandic bank, sold a portion of shares that were badly undervalued due to a <a href="https://www.bloomberg.com/news/articles/2022-11-14/bungled-excel-sheet-hurts-profits-from-islandsbanki-sale">spreadsheet error</a>. When consolidating assets from different spreadsheets, the spreadsheet data was not “cleaned” and formatted properly. The bank’s shares were subsequently undervalued by as much as £16 million. </p>
<h2>The dark matter of corporate IT</h2>
<p>The above is just a fraction of the spreadsheet errors that are regularly made by various organisations. </p>
<p>Spreadsheets represent unknown risks in the form of errors, privacy violations, leaked trade secrets and compliance breaches. Yet they are also critical to the way many organisations make their decisions. For this reason, they have been <a href="https://www.igi-global.com/article/end-user-computing/81295">described</a> by experts as the “dark matter” of corporate IT. </p>
<p>Industry <a href="https://www.igi-global.com/article/know-spreadsheet-errors/55750">studies</a> show that 90% of spreadsheets containing more than 150 rows have at least one major mistake. </p>
<p>This is understandable because spreadsheet errors are easy to make but difficult to spot. My <a href="https://aisel.aisnet.org/cais/vol25/iss1/34/">own research</a> has shown that inspecting the spreadsheet’s code is the most effective way of debugging them, but this approach still only catches between 60% and 80% of all errors. </p>
<figure class="align-center ">
<img alt="A close up of Microsoft Excel spreadsheet." src="https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/570385/original/file-20240119-15-gebegy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">As many as 9 out of 10 spreadsheets are estimated to contain errors.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/new-york-usa-august-18-2017-699112366">PixieMe/Shutterstock</a></span>
</figcaption>
</figure>
<p>Spreadsheets’ appeal doesn’t just exist in the financial world. They are indispensable in <a href="https://eusprig.org/wp-content/uploads/1801.10231.pdf">engineering</a>, <a href="https://ijcis.net/index.php/ijcis/article/view/79">data science</a> and even in <a href="https://ntrs.nasa.gov/citations/20150008644">sending robots</a> to Mars. The key to their success is their flexibility. </p>
<p>Spreadsheet software is constantly evolving, with more features becoming available that increase their appeal. For instance, you can now automate many tasks in Excel (the most popular spreadsheet software) using Python scripting.</p>
<p>But given all of the aforementioned problems, isn’t it time for Excel and other spreadsheet software to be sidelined in favour of something more reliable? </p>
<h2>Human error</h2>
<p>The underlying cause of these spreadsheet problems is not the software but human error. </p>
<p>The issue is that most users don’t see the need to plan or test their work. Most users <a href="https://www.igi-global.com/article/errors-operational-spreadsheets/4145">describe</a> their first step in creating a new spreadsheet as merely jumping straight in and entering numbers or code directly. </p>
<p>Many of us don’t think spreadsheets warrant serious consideration. This means we become <a href="https://eusprig.org/wp-content/uploads/0804.0941.pdf">complacent</a> and assume there is no need to test, validate or verify our work.</p>
<p><a href="https://www.igi-global.com/gateway/article/3762">Research</a> on “cognitive load”, the amount of mental effort required for a task, shows that building complex spreadsheets demands as much concentration as a GP making a diagnosis. This intense mental strain makes mistakes more likely. But GPs study their profession for many years before becoming qualified while most spreadsheet users are <a href="https://eusprig.org/wp-content/uploads/0803.1862.pdf">self-taught</a>. </p>
<p>To break the cycle of repeated spreadsheet errors, there are several things organisations can do. First, introducing standardisation would help to minimise confusion and mistakes. For example, this would mean consistent formatting, naming conventions and data structures across spreadsheets.</p>
<p>Second, improving training is crucial. Equipping users with the knowledge and skills to build robust and accurate spreadsheets could help them identify and avoid pitfalls. </p>
<p>Finally, fostering a culture of critical thinking towards spreadsheets is vital. This would mean encouraging users to continually question calculations, validate their data sources and double-check their work.</p>
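<p>As a simple example of the kind of validation being suggested here, the snippet below (Python with pandas; the column names and rules are hypothetical) checks exported spreadsheet data for common problems such as missing values, duplicate identifiers and numbers outside an expected range before the data is used for any decision.</p>
<pre><code class="language-python">
import pandas as pd

# Hypothetical export of one spreadsheet tab. In practice, something like
# pd.read_excel("applicants.xlsx", sheet_name=None) reads every tab, which
# also guards against an overlooked second sheet.
df = pd.DataFrame({
    "candidate_id": [101, 102, 102, 104],
    "interview_score": [78, None, 91, 340],   # scores should lie between 0 and 100
})

problems = []
if df["candidate_id"].duplicated().any():
    problems.append("duplicate candidate IDs")
if df["interview_score"].isna().any():
    problems.append("missing interview scores")
if not df["interview_score"].dropna().between(0, 100).all():
    problems.append("scores outside the 0-100 range")

# Fail loudly instead of silently producing wrong results downstream.
if problems:
    raise ValueError("spreadsheet checks failed: " + "; ".join(problems))
</code></pre>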
<p class="fine-print"><em><span>Simon Thorne is affiliated with The European Spreadsheets Risks Interest Group</span></em></p>Spreadsheet-related errors can have serious consequences in the private and public sector. But what can we do to overcome them?Simon Thorne, Senior Lecturer in Computing and Information Systems, Cardiff Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2205352024-01-19T13:42:02Z2024-01-19T13:42:02ZMac at 40: User experience was the innovation that launched a technology revolution<figure><img src="https://images.theconversation.com/files/569686/original/file-20240116-19-t76qy0.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1159%2C877&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The original Macintosh computer may seem quaint today, but the way users interacted with it triggered a revolution 40 years ago.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/markgregory/35604028241"> Mark Mathosian/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span></figcaption></figure><p>Technology innovation requires solving hard technical problems, right? Well, yes. And no. As the Apple Macintosh turns 40, what began as Apple prioritizing the squishy concept of “user experience” in its 1984 flagship product is, today, clearly vindicated by its blockbuster products since.</p>
<p>It turns out that designing for usability, efficiency, accessibility, elegance and delight pays off. Apple’s market capitalization is now over US$2.8 trillion, and its brand is every bit as associated with the term “design” as the best New York or Milan fashion houses are. Apple turned technology into fashion, and it did it through user experience.</p>
<p>It began with the Macintosh.</p>
<p>When Apple announced the Macintosh personal computer with a Super Bowl XVIII <a href="https://invention.si.edu/remembering-apple-s-1984-super-bowl-ad">television ad</a> on Jan. 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/2zfqw8nhUwA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Apple’s 1984 Super Bowl commercial is as iconic as the product it introduced.</span></figcaption>
</figure>
<p>This was not about computing anymore. IBM, Commodore and Tandy did computers. As a <a href="https://scholar.google.com/citations?hl=en&user=TmZ3howAAAAJ&view_op=list_works&sortby=pubdate">human-computer interaction scholar</a>, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that “computer stuff” – circuits and wires and separate motherboards and monitors – was neatly packaged and hidden away within one sleek integrated box.</p>
<p>You weren’t supposed to dig into that box, and you didn’t need to dig into that box – not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box <a href="https://doi.org/10.1016/j.intcom.2010.04.002">made them feel</a>.</p>
<h2>Beyond the mouse and desktop metaphor</h2>
<p>As computers go, was the Macintosh innovative? Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows and so on. The Macintosh was not the first personal computer meant for home, office or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The <a href="https://doi.org/10.1145/242388.242405">Apple Lisa</a>, released a year before, had them all.</p>
<p>It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory – not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.</p>
<p>Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multibutton mice (Xerox STAR, Commodore 64), the Macintosh used a <a href="https://everest-pipkin.com/writing/beautiful_house.pdf">desktop metaphor</a> in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double-click and drag-and-drop icons without typing commands.</p>
<p>The <a href="https://spectrum.ieee.org/xerox-alto">Xerox Alto</a> had first exhibited the concept of icons, invented in David Canfield Smith’s <a href="https://doi.org/10.1007/978-3-0348-5744-4">1975 Ph.D. dissertation</a>. The 1981 <a href="http://dl.acm.org/citation.cfm?id=66893.66894">Xerox Star</a> and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.</p>
<p>The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/2B-XwPjn9YY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Apple CEO Steve Jobs introduced the Macintosh on Jan. 24, 1984.</span></figcaption>
</figure>
<p>The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.</p>
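<p>That idea of a shared toolbox of ready-made widgets is still how graphical applications are built today. As a loose modern analogue (this is not Macintosh code; the original Toolbox was called from Pascal and assembly), the short Python/Tkinter sketch below assembles a window, a menu, a label and a button entirely from standard components, which is what gives applications a consistent look and feel.</p>
<pre><code class="language-python">
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.title("Toolbox demo")

# Standard widgets supplied by the toolkit, not drawn by the application itself.
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

tk.Label(root, text="Hello from a standard widget set").pack(padx=20, pady=10)
tk.Button(root, text="Say hi",
          command=lambda: messagebox.showinfo("Greeting", "Hi!")).pack(pady=10)

root.mainloop()
</code></pre>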
<h2>A good experience</h2>
<p>Although I hesitate to use the cliches “natural” or “intuitive” when it comes to fabricated worlds on a screen – nobody is born knowing what a desktop window, pull-down menu or double-click is – the Macintosh was the first personal computer to make user experience the driver of technical achievement. It indeed was <a href="https://www.computerhistory.org/revolution/personal-computers/17/303">simple to operate</a>, especially compared with command-line computers at the time.</p>
<p>Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users – at work, school or in the home – to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices and computers of every kind.</p>
<p>According to Market Growth Reports, companies devoted to providing user experience tools and services <a href="https://www.marketgrowthreports.com/global-user-experience-ux-market-26446759">were worth $548.91 million in 2023</a> and are expected to reach $1.36 billion by 2029. User experience companies provide software and services to support usability testing, user research, <a href="https://dynamics.microsoft.com/en-us/customer-voice/what-is-the-voice-of-customer/">voice-of-the-customer</a> initiatives and user interface design, among many other user experience activities.</p>
<p>Rarely today do consumer products succeed in the market based on functionality alone. Consumers <a href="https://doi.org/10.1111/j.1948-7169.2006.tb00027.x">expect a good user experience and will pay a premium for it</a>. The Macintosh <a href="https://biztechmagazine.com/article/2019/01/original-apple-macintosh-revolutionized-personal-computing">started that obsession</a> and demonstrated its centrality. </p>
<p>It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.</p>
<p class="fine-print"><em><span>I have had two Ph.D. students receive Apple Ph.D. AI/ML Fellowships. This funding does not support me personally, but supports two of the Ph.D. students that I have advised. They obtained these fellowships through competitive submissions to Apple based on an open solicitation.</span></em></p>Apple’s phenomenal success and the field of user experience design can be traced back to the launch of the Macintosh personal computer.Jacob O. Wobbrock, Professor of Information, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2177332024-01-02T16:49:59Z2024-01-02T16:49:59ZAI can now attend a meeting and write code for you – here’s why you should be cautious<p>Microsoft recently <a href="https://blogs.microsoft.com/blog/2023/09/21/announcing-microsoft-copilot-your-everyday-ai-companion/">launched</a> a new version of all of its software with the addition of an artificial intelligence (AI) assistant that can do a variety of tasks for you. <a href="https://adoption.microsoft.com/en-us/copilot/">Copilot</a> can summarise verbal conversations on <a href="https://support.microsoft.com/en-us/office/join-a-meeting-in-microsoft-teams-1613bb53-f3fa-431e-85a9-d6a91e3468c9">Teams</a> online meetings, present arguments for or against a particular point based on verbal discussions and answer a portion of your emails. It can even write computer code.</p>
<p>This quickly developing technology appears to take us even closer to a future where AI makes our lives easier and takes away all of the boring and repetitive things we have to do as humans. </p>
<p>But while these advancements are all very impressive and useful, we must be cautious in our use of such <a href="https://www.techopedia.com/definition/34948/large-language-model-llm">large language models</a> (LLMs). Despite their intuitive nature, they still require skill to use effectively, reliably and safely.</p>
<h2>Large language models</h2>
<p>LLMs, a type of “deep learning” neural network, are designed to understand the user’s intent by analysing the probability of different responses based on the prompt provided. So, when a person inputs a prompt, the LLM examines the text and determines the most likely response. </p>
<p><a href="https://chat.openai.com">ChatGPT</a>, a prominent example of an LLM, can provide answers to prompts on a wide range of subjects. However, despite its seemingly knowledgeable responses, ChatGPT <a href="https://venturebeat.com/ai/llms-have-not-learned-our-language-were-trying-to-learn-theirs%EF%BF%BC/">does not</a> possess actual knowledge. Its responses are simply the most probable outcomes based on the given prompt.</p>
<p>When people provide ChatGPT, Copilot and other LLMs with detailed descriptions of the tasks they want to accomplish, these models can excel at providing high-quality responses. This could include generating text, images or computer code. </p>
<p>But, as humans, we often push the boundaries of what technology can do and what it was originally designed for. Consequently, we start using these systems to do the legwork that we should have done ourselves.</p>
<figure class="align-center ">
<img alt="The Microsoft 365 Copilot logo is displayed on a smartphone screen held in a hand." src="https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&rect=53%2C8%2C6000%2C3979&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/562981/original/file-20231201-29-8xiuff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Microsoft Copilot is available in Windows 11 and Microsoft 365.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/june-7-2023-brazil-this-photo-2314245893">rafapress/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Why over-reliance on AI could be a problem</h2>
<p>Despite their seemingly intelligent responses, we cannot blindly <a href="https://www.scientificamerican.com/article/how-can-we-trust-ai-if-we-dont-know-how-it-works/#:%7E:text=Humans%20are%20largely%20predictable%20to,make%20it%20worthy%20of%20trust.">trust</a> LLMs to be accurate or reliable. We must carefully evaluate and verify their outputs, ensuring that our initial prompts are reflected in the answers provided. </p>
<p>To effectively verify and validate LLM outputs, we need to have a strong understanding of the subject matter. Without expertise, we cannot provide the necessary quality assurance.</p>
<p>This becomes particularly critical in situations where we are using LLMs to bridge gaps in our own knowledge. Here our lack of knowledge may lead us to a situation where we are simply unable to determine whether the output is correct or not. This situation can arise in generation of text and coding. </p>
<p>Using AI to attend meetings and summarise the discussion presents obvious risks around reliability. While the record of the meeting is based on a transcript, the meeting notes are still generated in the same fashion as other text from LLMs. They are still based on language patterns and probabilities of what was said, so they require verification before they can be acted upon. </p>
<p>They also suffer from interpretation problems due to <a href="https://ieeexplore.ieee.org/abstract/document/9016769">homophones</a>, words that are pronounced the same but have different meanings. People are good at understanding what is meant in such circumstances due to the context of the conversation.</p>
<p>But AI is not good at deducing context nor does it understand nuance. So, expecting it to formulate arguments based upon a potentially erroneous transcript poses further problems still. </p>
<p>Verification is even harder if we are using AI to generate computer code. Testing computer code with test data is the only reliable method for validating its functionality. While this demonstrates that the code operates as intended, it doesn’t guarantee that its behaviour aligns with real-world expectations. </p>
<p>Suppose we use generative AI to create code for a sentiment analysis tool. The goal is to analyse product reviews and categorise sentiments as positive, neutral or negative. We can test the functionality of the system and validate the code functions correctly – that it is sound from a technical programming point of view. </p>
<p>However, imagine that we deploy such software in the real world and it starts to classify sarcastic product reviews as positive. The sentiment analysis system lacks the contextual knowledge necessary to understand that sarcasm is not used as positive feedback, and quite the opposite. </p>
<p>Verifying that a code’s output matches the desired outcomes in nuanced situations such as this requires expertise. </p>
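<p>A small, hypothetical example makes the point. Suppose the generated sentiment code were as naive as the keyword counter below (deliberately simplified; it is not real AI-generated output). It passes straightforward functional tests, yet it confidently labels a sarcastic review as positive, a failure that only someone who understands the domain would think to test for.</p>
<pre><code class="language-python">
import string

POSITIVE = {"great", "love", "excellent", "fantastic"}
NEGATIVE = {"bad", "broken", "terrible", "refund"}

def classify(review):
    # Strip punctuation, lower-case, then count positive versus negative keywords.
    words = review.lower().translate(str.maketrans("", "", string.punctuation)).split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score == 0:
        return "neutral"
    return "negative"

# Functional tests pass: the code is "sound" in a narrow technical sense.
assert classify("I love it, excellent product") == "positive"
assert classify("Terrible, arrived broken") == "negative"

# But sarcasm slips straight through, because the keywords look glowing.
print(classify("Oh great, it broke after one day. Fantastic."))  # prints "positive"
</code></pre>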
<hr>
<p><em><strong>Read more: <a href="https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704">ChatGPT turns 1: AI chatbot's success says as much about humans as technology</a></strong></em></p>
<hr>
<p>Non-programmers will have no knowledge of the software engineering principles used to ensure code is correct, such as planning, methodology, testing and documentation. Programming is a complex discipline, and software engineering emerged as a field to manage software quality. </p>
<p>There is a significant risk, as my own <a href="https://www.researchgate.net/publication/372606390_Experimenting_with_ChatGPT_for_Spreadsheet_Formula_Generation_Evidence_of_Risk_in_AI_Generated_Spreadsheets#fullTextFileContent">research</a> has shown, that non-experts will overlook or skip critical steps in the software design process, leading to code of unknown quality.</p>
<h2>Validation and verification</h2>
<p>LLMs such as ChatGPT and Copilot are powerful tools that we can all benefit from. But we must be careful to not blindly trust the outputs given to us. </p>
<p>We are right at the start of a great revolution based on this technology. AI has infinite possibilities but it needs to be shaped, checked and verified. And at present, human beings are the only ones who can do this.</p>
<p class="fine-print"><em><span>Simon Thorne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Microsoft Copilot can summarise meetings and even formulate arguments. But as good as that sounds, we shouldn’t blindly trust its accuracy.Simon Thorne, Senior Lecturer in Computing and Information Systems, Cardiff Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2198082023-12-19T03:53:18Z2023-12-19T03:53:18Z2023 was the year of generative AI. What can we expect in 2024?<figure><img src="https://images.theconversation.com/files/565414/original/file-20231213-29-11cbup.jpg?ixlib=rb-1.1.0&rect=120%2C18%2C1435%2C941&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Midjourney image by T.J. Thomson</span></span></figcaption></figure><p>In 2023, artificial intelligence (AI) truly entered our daily lives. The <a href="https://www.ofcom.org.uk/news-centre/2023/gen-z-driving-early-adoption-of-gen-ai">latest data</a> shows four in five teenagers in the United Kingdom are using generative AI tools. About <a href="https://www.techrepublic.com/article/australia-adapting-generative-ai/">two-thirds of Australian employees</a> report using generative AI for work.</p>
<p>At first, many people used these tools because they were curious about generative AI or wanted to be entertained. Now, people ask generative AI for help with studies, <a href="https://theconversation.com/move-over-agony-aunt-study-finds-chatgpt-gives-better-advice-than-professional-columnists-214274">for advice</a>, or use it to find or synthesise information. Other uses include getting help coding and making images, videos, or audio. </p>
<p>So-called “<a href="https://www.abc.net.au/news/science/2023-04-02/prompt-engineers-share-their-tips-on-using-chatgpt-generative-ai/102165132">prompt whisperers</a>” or prompt engineers offer guides on not just designing the best AI prompts, but even how to blend different AI services to achieve fantastical outputs.</p>
<p>AI uses and functions have also shifted over the past 12 months as technological development, regulation and social factors have shaped what’s possible. Here’s where we’re at, and what might come in 2024.</p>
<hr>
<p><em><strong>Read more: <a href="https://theconversation.com/ai-to-z-all-the-terms-you-need-to-know-to-keep-up-in-the-ai-hype-age-203917">AI to Z: all the terms you need to know to keep up in the AI hype age</a></strong></em></p>
<hr>
<h2>AI changed how we work and pray</h2>
<p>Generative AI made waves early in the year when it was used to enter and even win <a href="https://www.australianphotography.com/news/ai-generated-image-wins-australian-photo-comp">photography competitions</a>, and tested for its ability to <a href="https://www.nbcnews.com/tech/tech-news/chatgpt-passes-mba-exam-wharton-professor-rcna67036">pass school exams</a>.</p>
<p>ChatGPT, the chatbot that’s become a household name, reached a user base of 100 million by February – about four times the size of Australia’s population. </p>
<p>Some musicians used AI <a href="https://variety.com/2023/music/news/david-guetta-eminem-artificial-intelligence-1235516924/">voice cloning</a> to create synthetic music that sounds like popular artists, such as Eminem. Google launched its chatbot, Bard. Microsoft integrated AI into Bing search. Snapchat launched MyAI, a ChatGPT-powered tool that allows users to ask questions and receive suggestions.</p>
<p>GPT-4, the latest iteration of the AI that powers ChatGPT, launched in March. This release <a href="https://openai.com/gpt-4">brought new features</a>, such as analysing documents or longer pieces of text.</p>
<p>Also in March, corporate giants like Coca-Cola began <a href="https://www.creativebloq.com/news/coca-cola-ad-masterpiece">generating ads</a> partly through AI, while Levi’s said it would use AI for creating <a href="https://www.theverge.com/2023/3/27/23658385/levis-ai-generated-clothing-model-diversity-denim">virtual models</a>. The now-infamous image of the Pope wearing a white Balenciaga puffer jacket went viral. A cohort of tech evangelists also called for an AI development pause.</p>
<hr>
<p><em><strong>Read more: <a href="https://theconversation.com/the-pope-francis-puffer-coat-was-fake-heres-a-history-of-real-papal-fashion-202873">The Pope Francis puffer coat was fake – here's a history of real papal fashion</a></strong></em></p>
<hr>
<p>Amazon began integrating generative AI tools into its products and services in April. Meanwhile, Japan ruled there would be <a href="https://petapixel.com/2023/06/05/japan-declares-ai-training-data-fair-game-and-will-not-enforce-copyright/">no copyright restrictions</a> for training generative AI in the country. </p>
<p>In the United States, screenwriters went on strike in May, demanding a ban of AI-generated scripts. Another AI-generated image, allegedly of <a href="https://www.npr.org/2023/05/22/1177590231/fake-viral-images-of-an-explosion-at-the-pentagon-were-probably-created-by-ai">the Pentagon on fire</a>, went viral.</p>
<p>In July, worshippers experienced some of the first <a href="https://www.csmonitor.com/USA/Society/2023/0718/Computer-generated-prayer-How-AI-is-changing-faith#:%7E:text=In%20Germany%2C%20a%20Lutheran%20church,Some%20religious%20leaders%20have%20reservations.">religious services</a> led by AI. </p>
<p>In August, two months after AI-generated summaries became available in Zoom, <a href="https://www.nbcnews.com/tech/innovation/zoom-ai-privacy-tos-terms-of-service-data-rcna98665">the company faced intense scrutiny</a> for changes to its terms of service around consumer data and AI. The company later clarified its policy and pledged not to use customers’ data without consent to train AI.</p>
<p>In September, voice and image functionalities came to ChatGPT for paid users. Adobe began <a href="https://itbrief.com.au/story/adobe-launches-photoshop-on-the-web-complete-with-genai-features">integrating generative AI</a> into its applications like Illustrator and Photoshop.</p>
<p>By December, we saw an increased shift to “<a href="https://www.forbes.com/sites/forbestechcouncil/2023/12/08/what-manufacturers-should-know-when-implementing-edge-ai/?sh=6425e17c670c">Edge AI</a>”, where AI processes are handled locally, on devices themselves, rather than in the cloud, which has benefits in contexts when privacy and security are paramount. Meanwhile, the EU announced the world’s first “<a href="https://www.eeas.europa.eu/delegations/australia/world%E2%80%99s-first-ai-law-eu-announces-provisional-agreement-ai-act_en">AI Law</a>”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-the-world-is-finally-starting-to-regulate-artificial-intelligence-what-to-expect-from-us-eu-and-chinas-new-laws-217573">AI: the world is finally starting to regulate artificial intelligence – what to expect from US, EU and China's new laws</a>
</strong>
</em>
</p>
<hr>
<h2>Where to from here?</h2>
<p>Given the whirlwind of AI developments in the past 12 months, we’re likely to see more incremental changes in the next year and beyond.</p>
<p>In particular, we expect to see changes in these four areas.</p>
<p><strong>Increased bundling of AI services and functions</strong></p>
<p>ChatGPT was initially just a chatbot that could generate text. Now, it can generate text, images and audio. Google’s Bard can now <a href="https://blog.google/products/bard/google-bard-new-features-update-sept-2023/">interface with Gmail, Docs and Drive</a>, and complete tasks across these services.</p>
<p>By bundling generative AI into existing services and combining functions, companies will try to maintain their market share and make AI services more intuitive, accessible and useful. </p>
<p>At the same time, bundled services make users more vulnerable when inevitable data breaches happen.</p>
<p><strong>Higher quality, more realistic generations</strong></p>
<p>Earlier this year, AI struggled with rendering <a href="https://www.newyorker.com/culture/rabbit-holes/the-uncanny-failures-of-ai-generated-hands">human hands and limbs</a>. By now, AI generators have markedly improved on these tasks.</p>
<p>At the same time, <a href="https://theconversation.com/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images-208748">research has shown</a> just how biased many AI generators can be. </p>
<p>Some developers have created <a href="https://peopleofcolorintech.com/articles/the-black-gpt-introducing-the-ai-model-trained-with-diversity-and-inclusivity-in-mind/">models</a> with diversity and inclusivity in mind. Companies will likely see a benefit in providing services that reflect the diversity of their customer bases.</p>
<p><strong>Growing calls for transparency and media standards</strong></p>
<p>Various news platforms have been <a href="https://www.theage.com.au/technology/sora-tanaka-is-a-fitness-obsessed-journalist-she-also-doesn-t-exist-20231128-p5enaj.html">slammed</a> in 2023 for producing AI-generated content without transparently communicating this. </p>
<p>AI-generated images of world leaders and other newsworthy events <a href="https://www.crikey.com.au/2023/11/01/israel-gaza-adobe-artificial-intelligence-images-fake-news/">abound on social media</a>, with high potential to mislead and deceive. </p>
<p>To improve public trust, the media industry will need standards that transparently and consistently denote when AI has been used to create or augment content.</p>
<p><strong>Expansion of sovereign AI capacity</strong></p>
<p>In these early days, many have been content playfully exploring AI’s possibilities. However, as these AI tools begin to unlock rapid advancements across all sectors of our society, more fine-grained control over who governs these foundational technologies will become increasingly important.</p>
<p>In 2024, we will likely see future-focused leaders incentivising the development of their <a href="https://economictimes.indiatimes.com/tech/technology/countries-need-to-build-sovereign-ai-capabilities-ibm-ceo-arvind-krishna/articleshow/103146863.cms">sovereign capabilities</a> through increased research and development funding, training programs and other investments.</p>
<p>For the rest of us, whether you’re using generative AI for fun, work, or school, understanding the strengths and limitations of the technology is essential for using it in responsible, respectful and productive ways. </p>
<p>It is equally important to understand how others – from governments to doctors – are increasingly using AI in ways that affect you.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/artificial-intelligence-is-already-in-our-hospitals-5-questions-people-want-answered-217374">Artificial intelligence is already in our hospitals. 5 questions people want answered</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/219808/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.</span></em></p><p class="fine-print"><em><span>Daniel Angus receives funding from the Australian Research Council. He is a Chief Investigator with the ARC Centre of Excellence for Automated Decision Making & Society.</span></em></p>Generative AI has changed the ways we work, study and even pray. Here are some highlights of an astonishing year of change – and what we can expect next.T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT UniversityDaniel Angus, Professor of Digital Communication, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2200442023-12-18T16:17:12Z2023-12-18T16:17:12ZA new supercomputer aims to closely mimic the human brain — it could help unlock the secrets of the mind and advance AI<figure><img src="https://images.theconversation.com/files/566252/original/file-20231218-15-hajmbj.jpg?ixlib=rb-1.1.0&rect=19%2C9%2C6470%2C3940&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businessman-touching-digital-human-brain-cell-582507070">Sdecoret / Shutterstock</a></span></figcaption></figure><p>A supercomputer scheduled to go online in April 2024 will rival the estimated rate of operations in the human brain, <a href="https://www.westernsydney.edu.au/newscentre/news_centre/more_news_stories/world_first_supercomputer_capable_of_brain-scale_simulation_being_built_at_western_sydney_university">according to researchers in Australia</a>. The machine, called DeepSouth, is capable of performing 228 trillion operations per second. </p>
<p>It’s the world’s first supercomputer capable of simulating networks of neurons and synapses (key biological structures that make up our nervous system) at the scale of the human brain.</p>
<p>DeepSouth belongs to an approach <a href="https://www.nature.com/articles/s43588-021-00184-y">known as neuromorphic computing</a>, which aims to mimic the biological processes of the human brain. It will be run from the International Centre for Neuromorphic Systems at Western Sydney University.</p>
<p>Our brain is the most amazing computing machine we know. By distributing its computing power across billions of small units (neurons) that interact through trillions of connections (synapses), the brain can rival the most powerful supercomputers in the world, while requiring only about as much power as a fridge light bulb.</p>
<p>Supercomputers, meanwhile, generally take up lots of space and need large amounts of electrical power to run. The world’s most powerful supercomputer, the <a href="https://www.hpe.com/uk/en/compute/hpc/cray/oak-ridge-national-laboratory.html">Hewlett Packard Enterprise Frontier</a>, can perform just over one quintillion operations per second. It covers 680 square metres (7,300 sq ft) and requires 22.7 megawatts (MW) to run. </p>
<p>Our brains can perform the same number of operations per second with just 20 watts of power, while weighing just 1.3kg-1.4kg. Among other things, neuromorphic computing aims to unlock the secrets of this amazing efficiency.</p>
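<p>To make that efficiency gap concrete, here is a rough back-of-the-envelope comparison in Python. It simply takes the figures quoted above at face value (treating the brain’s throughput as comparable to Frontier’s one quintillion operations per second, which is an estimate rather than a measurement):</p>
<pre><code># Back-of-the-envelope efficiency comparison using the figures quoted above.
# These are illustrative estimates, not measurements.

frontier_ops_per_second = 1e18   # just over one quintillion operations per second
frontier_power_watts = 22.7e6    # 22.7 megawatts

brain_ops_per_second = 1e18      # assumed comparable throughput, per the estimate above
brain_power_watts = 20           # roughly 20 watts

frontier_ops_per_joule = frontier_ops_per_second / frontier_power_watts
brain_ops_per_joule = brain_ops_per_second / brain_power_watts

print(f"Frontier: {frontier_ops_per_joule:.2e} operations per joule")
print(f"Brain:    {brain_ops_per_joule:.2e} operations per joule")
print(f"Efficiency ratio: roughly {brain_ops_per_joule / frontier_ops_per_joule:,.0f}x")
</code></pre>
<p>On those assumptions, the brain works out to be roughly a million times more energy efficient – the kind of gap neuromorphic computing hopes to narrow.</p>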
<h2>Transistors at the limits</h2>
<p>On June 30 1945, the mathematician and physicist <a href="https://www.ias.edu/von-neumann">John von Neumann</a> described the design of a new machine, the <a href="https://ieeexplore.ieee.org/document/194089">Electronic Discrete Variable Automatic Computer (Edvac)</a>. This effectively defined the modern electronic computer as we know it. </p>
<p>My smartphone, the laptop I am using to write this article and the most powerful supercomputer in the world all share the same fundamental structure introduced by von Neumann almost 80 years ago. <a href="https://www.sciencedirect.com/topics/computer-science/von-neumann-architecture">These all have distinct processing and memory units</a>, where data and instructions are stored in the memory and computed by a processor.</p>
<p>For decades, the number of transistors on a microchip doubled approximately every two years, <a href="https://ieeexplore.ieee.org/abstract/document/591665">an observation known as Moore’s Law</a>. This allowed us to have smaller and cheaper computers. </p>
<p>However, transistor sizes are now approaching the atomic scale. At these tiny sizes, excessive heat generation is a problem, as is a phenomenon called quantum tunnelling, which interferes with the functioning of the transistors. <a href="https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips#:%7E:text=They're%20made%20of%20silicon,we%20can%20make%20a%20transistor.">This is slowing down</a> and will eventually halt transistor miniaturisation.</p>
<p>To overcome this issue, scientists are exploring new approaches to
computing, starting from the powerful computer we all have hidden in our heads, the human brain. Our brains do not work according to John von Neumann’s model of the computer. They don’t have separate computing and memory areas. </p>
<p>They instead work by connecting billions of nerve cells that communicate information in the form of electrical impulses. Information can be passed from <a href="https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/action-potentials-and-synapses">one neuron to the next through a junction called a synapse</a>. The organisation of neurons and synapses in the brain is flexible, scalable and efficient. </p>
<p>So in the brain – and unlike in a computer – memory and computation are governed by the same neurons and synapses. Since the late 1980s, scientists have been studying this model with the intention of importing it to computing.</p>
<figure class="align-center ">
<img alt="Microchip." src="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The continuing miniaturisation of transistors on microchips is limited by the laws of physics.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-presentation-new-generation-microchip-gloved-691548583">Gorodenkoff / Shutterstock</a></span>
</figcaption>
</figure>
<h2>Imitation of life</h2>
<p>Neuromorphic computers are based on intricate networks of simple, elementary processors (which act like the brain’s neurons and synapses). The main advantage of this is that these machines <a href="https://www.electronicsworld.co.uk/advances-in-parallel-processing-with-neuromorphic-analogue-chip-implementations/34337/">are inherently “parallel”</a>. </p>
<p>This means that, <a href="https://www.pnas.org/doi/full/10.1073/pnas.95.3.933">as with neurons and synapses</a>, virtually all the processors in a computer can potentially be operating simultaneously, communicating in tandem.</p>
<p>In addition, because the computations performed by individual neurons and synapses are very simple compared with those in traditional computers, the energy consumption is orders of magnitude smaller. Although neurons are sometimes thought of as processing units, and synapses as memory units, they contribute to both processing and storage. In other words, data is already located where the computation requires it.</p>
<p>This speeds up the brain’s computing because there is no separation between memory and processor, which causes a slowdown in classical (von Neumann) machines. It also avoids the energy-hungry step of fetching data from a separate main memory component, as conventional computing systems must do.</p>
<p>The principles we have just described are the main inspiration for DeepSouth. This is not the only neuromorphic system currently active. It is worth mentioning the <a href="https://www.humanbrainproject.eu">Human Brain Project (HBP)</a>, funded under an <a href="https://ec.europa.eu/futurium/en/content/fet-flagships.html">EU initiative</a>. The HBP was operational from 2013 to 2023, and led to BrainScaleS, a machine located in Heidelberg, in Germany, that emulates the way that neurons and synapses work. </p>
<p><a href="https://www.humanbrainproject.eu/en/science-development/focus-areas/neuromorphic-computing/hardware/">BrainScaleS</a> can simulate the way that neurons “spike”, the way that an electrical impulse travels along a neuron in our brains. This would make BrainScaleS an ideal candidate to investigate the mechanics of cognitive processes and, in future, mechanisms underlying serious neurological and neurodegenerative diseases.</p>
<p>Because they are engineered to mimic actual brains, neuromorphic computers could be the beginning of a turning point. Offering sustainable and affordable computing power and allowing researchers to evaluate models of neurological systems, they are an ideal platform for a range of applications. They have the potential to both advance our understanding of the brain and offer new approaches to artificial intelligence.</p><img src="https://counter.theconversation.com/content/220044/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Neuromorphic computers aim to one day replicate the amazing efficiency of the brain.Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2133062023-11-17T13:29:43Z2023-11-17T13:29:43ZWhat is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers<figure><img src="https://images.theconversation.com/files/559476/original/file-20231114-21-dv3rca.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5731%2C3829&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IBM's quantum computer got President Joe Biden's attention.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/president-joe-biden-looks-at-quantum-computer-as-he-tours-news-photo/1243772280">Mandel Ngan/AFP via Getty Images</a></span></figcaption></figure><p>Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers. </p>
<p>Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.</p>
<p>There are some types of problems that are <a href="https://theconversation.com/limits-to-computing-a-computer-scientist-explains-why-even-in-the-age-of-ai-some-problems-are-just-too-difficult-191930">impractical for classical computers to solve</a>, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.</p>
<p>I am <a href="https://scholar.google.com/citations?user=2J2t64gAAAAJ&hl=en">a physicist</a> who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.</p>
<h2>The source of quantum computing’s power</h2>
<p>Central to quantum computing is the quantum bit, or <a href="https://quantumatlas.umd.edu/entry/qubit/">qubit</a>. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a <a href="https://quantumatlas.umd.edu/entry/superposition/">quantum superposition</a>. With every additional qubit, the number of states that can be represented by the qubits doubles. </p>
<p>This property is often mistaken for the source of the power of quantum computing. Instead, it comes down to an intricate interplay of superposition, <a href="https://encyclopedia2.thefreedictionary.com/Quantum+Interference">interference</a> and <a href="https://theconversation.com/nobel-winning-quantum-weirdness-undergirds-an-emerging-high-tech-industry-promising-better-ways-of-encrypting-communications-and-imaging-your-body-191929">entanglement</a>.</p>
<p>Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves – like sound waves or ocean waves – combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.</p>
<p>Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.</p>
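<p>As a minimal illustration of the ideas above – not a real quantum simulation, just the arithmetic behind them – the following short Python sketch shows how the number of basis states doubles with each extra qubit, and how amplitudes arriving along two computational paths can reinforce or cancel:</p>
<pre><code># Illustrative arithmetic only; this is not a quantum simulator.

# Each extra qubit doubles the number of basis states a register can hold in superposition.
for n_qubits in range(1, 6):
    print(f"{n_qubits} qubit(s): {2 ** n_qubits} basis states")

# Interference: amplitudes (not probabilities) are combined, then squared.
amp_path_a = 0.5   # amplitude contributed by one computational path
amp_path_b = 0.5   # amplitude contributed by another path

constructive = (amp_path_a + amp_path_b) ** 2   # peaks align: probability boosted to 1.0
destructive = (amp_path_a - amp_path_b) ** 2    # peak meets trough: probability cancels to 0.0

print(f"Constructive interference probability: {constructive}")
print(f"Destructive interference probability: {destructive}")
</code></pre>
<p>A quantum algorithm is, loosely speaking, a recipe for arranging these cancellations so that wrong answers interfere destructively and the correct one survives.</p>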
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jHoEjvuPoB8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The ones and zeros – and everything in between – of quantum computing.</span></figcaption>
</figure>
<h2>Applications of quantum computing</h2>
<p>Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the <a href="https://theconversation.com/is-quantum-computing-a-cybersecurity-threat-107411">potential to decipher current encryption algorithms</a>, such as the widely used <a href="https://www.britannica.com/topic/RSA-encryption">RSA scheme</a>. </p>
<p>One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of <a href="https://www.nist.gov/programs-projects/post-quantum-cryptography">post-quantum cryptography</a>. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.</p>
<p>In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman <a href="https://doi.org/10.1007/BF02650179">envisioned this possibility</a> more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties. </p>
<p>Another use of quantum information technology is <a href="https://doi.org/10.1103/RevModPhys.89.035002">quantum sensing</a>: detecting and measuring physical properties like electromagnetic energy, gravity, pressure and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as <a href="https://www.azoquantum.com/Article.aspx?ArticleID=444">environmental monitoring</a>, <a href="https://doi.org/10.1038/s41586-021-04315-3">geological exploration</a>, <a href="https://doi.org/10.1038/s42254-023-00558-3">medical imaging</a> and <a href="https://www.defenseone.com/ideas/2022/06/quantum-sensorsunlike-quantum-computersare-already-here/368634/">surveillance</a>.</p>
<p>Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks – including those using quantum computers.</p>
<p>Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage – in particular <a href="https://journals.aps.org/prxquantum/pdf/10.1103/PRXQuantum.3.030101">in machine learning</a> – remains a critical area of ongoing research.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a metal apparatus with green laser light in the background" src="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves.</span>
<span class="attribution"><a class="source" href="https://news.mit.edu/2022/quantum-sensor-frequency-0621">Guoqing Wang</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>Staying coherent and overcoming errors</h2>
<p>The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits. </p>
<p>Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, <a href="http://www.cambridge.org/9780521897877">an area my own research is focused on</a>.</p>
<p>In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.</p>
<h2>Quantum advantage coming into view</h2>
<p>Quantum computing may one day be as disruptive as the arrival of <a href="https://memberservices.theconversation.com/newsletters/?nl=ai">generative AI</a>. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. <a href="https://www.nature.com/articles/s41586-019-1666-5">Researchers at Google</a> and later a <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.180501">team of researchers in China</a> demonstrated quantum advantage <a href="https://doi.org/10.1038/s41534-023-00703-x">for generating a list of random numbers</a> with certain properties. My research team demonstrated a quantum speed-up <a href="https://doi.org/10.1103/PhysRevLett.130.210602">for a random number guessing game</a>.</p>
<p>On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.</p>
<p>While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.</p><img src="https://counter.theconversation.com/content/213306/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniel Lidar receives funding from the NSF, DARPA, ARO, and DOE.</span></em></p>Several companies have made quantum computers, but these early models have yet to demonstrate quantum advantage: the ability to outstrip ordinary supercomputers.Daniel Lidar, Professor of Electrical Engineering, Chemistry, and Physics & Astronomy, University of Southern CaliforniaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2176792023-11-16T23:59:02Z2023-11-16T23:59:02ZWhat is LockBit, the cybercrime gang hacking some of the world’s largest organisations?<p>While ransomware incidents have been occurring for more than 30 years, only in the last decade has the term “ransomware” appeared regularly in popular media. Ransomware is a type of malicious software that blocks access to computer systems or encrypts files until a ransom is paid.</p>
<p>Cybercriminal gangs have adopted ransomware as a get-rich-quick scheme. Now, in the era of “ransomware as a service”, this has become a prolific and highly profitable tactic. Providing ransomware as a service means groups benefit from affiliate schemes where commission is paid for successful ransom demands.</p>
<p>Although it is only one of many such gangs in operation, LockBit has become increasingly visible, with several high-profile victims recently appearing on the group’s website.</p>
<p>So what is LockBit? Who has fallen victim to them? And how can we protect ourselves from them?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/international-ransomware-gangs-are-evolving-their-techniques-the-next-generation-of-hackers-will-target-weaknesses-in-cryptocurrencies-211233">International ransomware gangs are evolving their techniques. The next generation of hackers will target weaknesses in cryptocurrencies</a>
</strong>
</em>
</p>
<hr>
<h2>What, or who, is LockBit?</h2>
<p>To make things confusing, the term LockBit refers to both the malicious software (malware) and to the group that created it.</p>
<p>LockBit <a href="https://www.kaspersky.com/resource-center/threats/lockbit-ransomware">first gained attention in 2019</a>. It’s a form of malware deliberately designed to be secretly deployed inside organisations, to find valuable data and steal it.</p>
<p>But rather than simply stealing the data, LockBit is a form of ransomware. Once the data has been copied, it is encrypted, rendering it inaccessible to the legitimate users. This data is then held to ransom – pay up, or you’ll never see your data again.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1723850461898281180"}"></div></p>
<p>To add further incentive for the victim, if the ransom is not paid, they are threatened with publication of the stolen data (often described as double extortion). This threat is reinforced with a countdown timer on LockBit’s blog on <a href="https://theconversation.com/explainer-what-is-the-dark-web-46070">the dark web</a>.</p>
<p>Little is known about the LockBit group. Based on their website, the group doesn’t have a specific political allegiance. Unlike some other groups, they also don’t limit the number of affiliates:</p>
<blockquote>
<p>We are located in the Netherlands, completely apolitical and only interested in money. We always have an unlimited amount of affiliates, enough space for all professionals. It does not matter what country you live in, what types of language you speak, what age you are, what religion you believe in, anyone on the planet can work with us at any time of the year.</p>
</blockquote>
<p>Notably, LockBit have rules for their affiliates. Examples of forbidden targets (victims) include:</p>
<ul>
<li>critical infrastructure</li>
<li>institutions where damage to the files could lead to death (such as hospitals)</li>
<li>post-Soviet countries such as Armenia, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan.</li>
</ul>
<p>Other ransomware providers have also claimed they won’t target institutions like hospitals – but this doesn’t guarantee victim immunity. Earlier this year a <a href="https://www.theregister.com/2023/01/04/lockbit_sickkids_ransomware/">Canadian hospital was a victim of LockBit</a>, triggering the group behind LockBit to post an apology, offer free decryption tools and allegedly expel the affiliate who hacked the hospital. </p>
<p>While rules may be in place, there is always potential for rogue users to <a href="https://www.scmagazine.com/analysis/ransomware-groups-dont-abide-by-promises-not-to-target-healthcare">target forbidden organisations</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1609857321315835906"}"></div></p>
<p>The final rule in the list above is an interesting exception. According to the group, these countries are off limits because a high proportion of the group’s members were “born and grew up in the Soviet Union”, despite now being “located in the Netherlands”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/putins-russia-people-increasingly-identify-with-the-soviet-union-heres-what-that-means-181129">Putin's Russia: people increasingly identify with the Soviet Union – here's what that means</a>
</strong>
</em>
</p>
<hr>
<h2>Who’s been hacked by LockBit?</h2>
<p>High-profile victims include the United Kingdom’s Royal Mail and Ministry of Defence, and Japanese cycling component manufacturer Shimano. Data stolen from aerospace company Boeing was leaked just this week after the company refused to pay ransom to LockBit.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="LockBit website screenshot showing download links for stolen data" src="https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=562&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=562&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=562&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=706&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=706&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559314/original/file-20231114-19-vcp8j5.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=706&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">LockBit’s website on the dark web is used to publish stolen data if the ransom is not paid.</span>
<span class="attribution"><span class="source">Screenshot sourced by authors.</span></span>
</figcaption>
</figure>
<p>While not yet confirmed, the recent ransomware incident experienced by the Industrial and Commercial Bank of China has been <a href="https://www.scmagazine.com/news/lockbit-takes-credit-for-ransomware-attack-on-us-subsidiary-of-chinese-bank">claimed by LockBit</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1723060456888193238"}"></div></p>
<p>Since appearing on the cybercrime scene, LockBit has been linked to almost <a href="https://www.cyber.gov.au/about-us/advisories/understanding-ransomware-threat-actors-lockbit">2,000 victims in the United States alone</a>.</p>
<p>From the list of victims seen below, LockBit is clearly being used in a scatter-gun approach, with a wide variety of victims. This is not a series of planned, targeted attacks. Instead, it shows LockBit software is being used by a diverse range of criminals in a service model.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="LockBit blog screenshot showing victims with countdown timer" src="https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=294&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=294&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=294&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=369&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=369&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559313/original/file-20231114-21-syppv0.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=369&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">LockBit’s blog on the dark web provides a showroom for public shaming of their victims.</span>
<span class="attribution"><span class="source">Screenshot sourced by authors.</span></span>
</figcaption>
</figure>
<h2>How we can protect ourselves</h2>
<p>In recent years, ransomware as a service (RaaS for short) has become popular.</p>
<p>Just as organisations use software-as-a-service providers – such as licensing for office tools like Microsoft 365, or accounting software for payroll – malicious services are providing tools for cybercriminals.</p>
<p>Ransomware as a service enables an inexperienced criminal to deliver a ransomware campaign to multiple targets quickly and efficiently – often at minimal cost and usually on a profit-sharing basis.</p>
<p>The RaaS platform handles the malware management, data extraction, victim negotiation and payment handling, effectively outsourcing criminal activities.</p>
<p>The process is so well developed that such groups even provide guidelines on how to become an affiliate and what benefits one will gain. With LockBit taking a 20% commission on each ransom paid, the system generates significant revenue for the group – as does the deposit of 1 Bitcoin (approximately A$58,000) required from new affiliates.</p>
<p>While ransomware is a growing concern around the globe, good cybersecurity practices can help. Updating and patching our systems, good password and account management, network monitoring and reacting to unusual activity can all help to minimise the likelihood of any compromise – or at least limit its extent.</p>
<p>For now, whether or not to pay a ransom is a matter of preference and ethics for each organisation. But if we can make it more difficult to get in, criminal groups will simply shift to easier targets.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australia-is-considering-a-ban-on-cyber-ransom-payments-but-it-could-backfire-heres-another-idea-194516">Australia is considering a ban on cyber ransom payments, but it could backfire. Here's another idea</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/217679/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Prolific and highly profitable, LockBit provides ransomware as a service. Aspiring cybercriminals sign up to the scheme, and the group takes a cut. Here’s how it works.Jennifer Medbury, Lecturer in Intelligence and Security, Edith Cowan UniversityPaul Haskell-Dowland, Professor of Cyber Security Practice, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2125552023-09-27T09:08:19Z2023-09-27T09:08:19ZDoes AI have a right to free speech? Only if it supports our right to free thought<figure><img src="https://images.theconversation.com/files/546682/original/file-20230906-29-kbyfn8.jpg?ixlib=rb-1.1.0&rect=40%2C0%2C8905%2C6143&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/artificial-intelligence-entity-using-voice-communicate-2106291758">ArtemisDiana / Shutterstock</a></span></figcaption></figure><p>The world has witnessed breathtaking advances in generative artificial intelligence (AI), with ChatGPT being one of the best known examples. To prevent harm and misuse of the technology, politicians are now considering <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach">regulating AI</a>. Yet they face an overlooked barrier: AI may have a right to free speech.</p>
<p>Under international law, humans possess <a href="https://digitallibrary.un.org/record/182777?ln=en">an inviolable right to freedom of thought</a>. As part of this, governments <a href="https://www.ohchr.org/en/documents/thematic-reports/a76380-interim-report-special-rapporteur-freedom-religion-or-belief">have a duty</a> to create an environment where people can think freely.</p>
<p>As we’ve seen with ChatGPT, AI can support our thinking, providing information and offering answers to our questions. This has led some to argue that our right to think freely may require giving AI a right to speak freely.</p>
<h2>Free thought needs free speech</h2>
<p>Recent <a href="https://time.com/6278220/protecting-ai-generated-speech-first-amendment/">articles</a>, <a href="https://www.journaloffreespeechlaw.org/volokhlemleyhenderson.pdf">papers</a> and <a href="https://www.cambridge.org/ie/universitypress/subjects/law/constitutional-and-administrative-law/robotica-speech-rights-and-artificial-intelligence?format=HB&isbn=9781108428064">books</a> from the US have made the case that AI has a right to free speech. </p>
<p>Corporations, like AI systems, are not people. Yet the <a href="https://supreme.justia.com/cases/federal/us/558/310/">US supreme court</a> has ruled that government should not suppress corporations’ political speech. This is because the <a href="https://constitution.congress.gov/constitution/amendment-1/">first amendment</a> protects Americans’ <a href="https://supreme.justia.com/cases/federal/us/558/310/">freedom to think for themselves</a>.</p>
<p>Free thought, says the US supreme court, requires us to hear from “<a href="https://supreme.justia.com/cases/federal/us/326/1/">diverse and antagonistic sources</a>”. The US government telling people where to get their information would be an unlawful use of “<a href="https://supreme.justia.com/cases/federal/us/558/310/">censorship to control thought</a>”. So corporations’ free speech is believed to create an environment where individuals are free to think.</p>
<p>The same principle could extend to AI. The US supreme court says that protecting speech “<a href="https://supreme.justia.com/cases/federal/us/435/765/">does not depend upon the identity of its source</a>”. Instead, the key criterion for protecting speech is that the speaker, whether an individual, corporation or AI, contributes to the marketplace of ideas. </p>
<h2>AI and misinformation</h2>
<p>Yet, an unthinking application of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922565">free speech law to AI</a> could be damaging. Giving AI free speech rights could actually harm our ability to think freely. We have a term, sophist, for those who use language to persuade us of falsehoods. While AI super-soldiers would be dangerous, AI super-sophists could be much worse.</p>
<figure class="align-center ">
<img alt="Woman looking at her smartphone." src="https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547024/original/file-20230907-23101-m1kr8l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AI could, in theory, flood the internet with misinformation, making it difficult to tell fact from fiction.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/upset-confused-african-woman-holding-cellphone-1361068583">fizkes / Shutterstock</a></span>
</figcaption>
</figure>
<p>An unconstrained AI might pollute the information landscape with misinformation, flooding us with “<a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">propaganda and untruth</a>”. But punishing falsehoods could easily <a href="https://global.oup.com/academic/product/liars-9780197545119">stray into censorship</a>. The best antidote to AI’s <a href="https://supreme.justia.com/cases/federal/us/274/357/">falsehoods and fallacies</a> could be more AI speech that <a href="https://time.com/6278220/protecting-ai-generated-speech-first-amendment/">counters misinformation</a>. </p>
<p>AI could also use its knowledge of human thinking to systematically attack <a href="https://www.simonandschuster.com/books/Freethinking/Simon-McCarthy-Jones/9780861544578">what makes our thought free</a>. It could control our attention, discourage pause for reflection, pervert our reasoning, and intimidate us into silence. Our minds could therefore become moulded by machines.</p>
<p>This could be the wake-up call we need to spur a renaissance in human thinking. Humans have been described as “<a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9781315658353-13/humans-cognitive-misers-means-great-rationality-debate-keith-stanovich">cognitive misers</a>”, which means we only really think when we need to. A free-speaking AI could force us to think more deeply and deliberately about what is true.</p>
<p>However, the huge quantities of speech that AI can produce could give it an oversized influence on society. Currently, the US supreme court views silencing some speakers to hear others better as “<a href="https://supreme.justia.com/cases/federal/us/424/1/">wholly foreign to the first amendment</a>”. But restricting the speech of machines might be necessary to allow human speech and thought to flourish.</p>
<h2>Proposed regulation of AI</h2>
<p>Both <a href="https://www.cambridge.org/core/books/abs/cambridge-handbook-of-the-law-of-algorithms/artificial-minds-in-first-amendment-borderlands/C60AF9211B5E72991DB73D6BD2D5155B">free speech law</a> and AI regulation must consider their impact on free thought. Take the <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">European Union’s draft AI act</a> and its proposed regulation of generative AI such as ChatGPT.</p>
<p>Firstly, this act requires AI-generated content to be disclosed. Knowing content comes from an AI, rather than a person, might help us evaluate it more clearly – promoting free thought.</p>
<p>But permitting some anonymous AI speech could help our thinking. AI’s owners may experience less public pressure to censor legal but controversial AI speech if such speech were anonymous. AI anonymity could also make us judge AI speech on its merits rather than <a href="https://www.cambridge.org/core/books/abs/cambridge-handbook-of-the-law-of-algorithms/artificial-minds-in-first-amendment-borderlands/C60AF9211B5E72991DB73D6BD2D5155B">reflexively dismissing it as “bot speech”</a>. </p>
<p>Secondly, the EU act requires companies to design their AI models to avoid generating illegal content, which in those countries includes hate speech. But this could prevent both legal and illegal speech being generated. European hate speech laws already cause both legal and illegal online comments to be deleted, <a href="http://justitia-int.org/en/new-report-digital-freedom-of-speech-and-social-media/">according to a think tank report</a>.</p>
<p>Holding companies liable for what their AI produces could also incentivise them to unnecessarily restrict what it says. The US’s section 230 law shields social media companies from much legal liability for their users’ speech, but <a href="https://www.journaloffreespeechlaw.org/perault.pdf">may not protect AI’s speech</a>. We may need new laws to insulate corporations from such pressures.</p>
<p>Finally, the act requires companies to publish summaries of copyrighted data used to train (improve) AI. The EU wants AI to share its library record. This could help us evaluate AI’s likely biases.</p>
<p>Yet humans’ reading records are protected for good reason. If we thought others could know what we read, we might be likely to shy away from controversial but potentially useful texts. Similarly, revealing AI’s reading list might pressurise tech companies not to train AI with legal but controversial material. This could limit AI’s speech and our free thought.</p>
<h2>Thinking with technology</h2>
<p>As Aza Raskin from the Center for Humane Technology <a href="https://www.humanetech.com/podcast/the-ai-dilemma">points out</a>, threats from new technologies can require us to develop new rights. <a href="https://www.humanetech.com/podcast/the-ai-dilemma">Raskin explains how</a> the ability of computers to preserve our words led to a new <a href="https://gdpr.eu/right-to-be-forgotten/">right to be forgotten</a>. AI may force us to <a href="https://www.simonandschuster.com/books/Freethinking/Simon-McCarthy-Jones/9780861544578">elaborate and reinvent our right to freedom of thought</a>.</p>
<p>Moving forward, we need what the legal scholar Marc Blitz terms “<a href="https://www.cambridge.org/core/books/abs/cambridge-handbook-of-the-law-of-algorithms/artificial-minds-in-first-amendment-borderlands/C60AF9211B5E72991DB73D6BD2D5155B">a right to think with technology</a>” – freedom to interact with AI and computers, using them to inform our thinking. Yet such thinking may not be free if AI is compelled to be “<a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">safe … aligned … and loyal</a>”, as tech experts recently demanded in a petition to pause AI development. </p>
<p>Granting AI free speech rights would both support and undermine our freedom of thought. This points to the need for AI regulation. Yet such regulatory action must clearly show how it complies with our inviolable right to freedom of thought, if we are to remain in control of our lives.</p><img src="https://counter.theconversation.com/content/212555/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Simon McCarthy-Jones receives funding from the European Union’s Horizon 2020 program via a Marie Skłodowska-Curie Actions Innovative Training Network.</span></em></p>If we decide that AI helps us think freely, we may need to give it rights too.Simon McCarthy-Jones, Associate Professor in Clinical Psychology and Neuropsychology, Trinity College DublinLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2031082023-08-07T12:44:10Z2023-08-07T12:44:10ZComputer science can help farmers explore alternative crops and sustainable farming methods<figure><img src="https://images.theconversation.com/files/540847/original/file-20230802-23327-tn8hq8.jpg?ixlib=rb-1.1.0&rect=19%2C4%2C3176%2C2122&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Chick peas intercropped with flax on a farm in Stanford, Mont.</span> <span class="attribution"><a class="source" href="https://flic.kr/p/2hKjCG2">USDA NRCS Montana</a></span></figcaption></figure><p>Humans have physically reconfigured <a href="https://ourworldindata.org/land-use">half of the world’s land</a> to grow just eight staple crops: maize (corn), soy, wheat, rice, cassava, sorghum, sweet potato and potato. They account for the vast majority of calories that people around the world consume. As global population rises, there’s pressure to <a href="https://www.theguardian.com/global-development/2022/nov/15/can-the-world-feed-8bn-people-sustainably">expand production even further</a>.</p>
<p>Many experts argue that further expanding modern industrialized agriculture – which relies heavily on synthetic fertilizer, chemical pesticides and high-yield seeds – <a href="https://research.wri.org/wrr-food">isn’t the right way</a> to <a href="https://www.unep.org/news-and-stories/story/10-things-you-should-know-about-industrial-farming">feed a growing world population</a>. In their view, this approach <a href="https://theconversation.com/us-agriculture-needs-a-21st-century-new-deal-112757">isn’t sustainable ecologically or economically</a>, and farmers and scientists alike <a href="https://theconversation.com/regenerative-agriculture-can-make-farmers-stewards-of-the-land-again-110570">feel trapped</a> within this system. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/i6teBcfKpik?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Corn’s evolution into a global commodity shows how industrialized agriculture has transformed farming.</span></figcaption>
</figure>
<p>How can societies develop a food system that meets their needs and is also more healthy and diverse? It has proved hard to scale up alternative methods, such as organic farming, as broadly as industrial agriculture.</p>
<p>In a recent study, we considered this problem from our perspectives as a <a href="https://scholar.google.com/citations?user=Ikz6_Y0AAAAJ&hl=en">computer scientist</a> and a <a href="https://scholar.google.ca/citations?user=cJQv8WsAAAAJ&hl=en">crop scientist</a>. We and our colleagues <a href="https://scholar.google.com/citations?user=O7xJ4mcAAAAJ&hl=en">Bryan Runck</a>, <a href="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22adam+streed%22&btnG=">Adam Streed</a>, <a href="https://scholar.google.com/citations?user=wDtsUmUAAAAJ&hl=en">Diane R. Wang</a> and <a href="https://scholar.google.com/citations?user=ukiVGLsAAAAJ&hl=en">Patrick M. Ewing</a> proposed a way to rethink <a href="https://doi.org/10.1093/pnasnexus/pgad084">how agricultural systems are designed and implemented</a>, using a central idea from computer science - abstraction - that summarizes data and concepts and organizes them computationally, so we can analyze and act upon them without having to constantly examine their internal details.</p>
<h2>Big output, big impacts</h2>
<p>Modern agriculture intensified over just a few decades in the mid-20th century – a blink of an eye in human history. <a href="https://doi.org/10.1073/pnas.0912953109">Technological improvements</a> led the way, including the development of <a href="https://www.britannica.com/technology/Haber-Bosch-process">synthetic fertilizer</a> and statistical methods that improved plant breeding. </p>
<p>These advances made it possible for farms to produce much larger quantities of food, but at the expense of the environment. Large-scale agriculture has <a href="https://www.ipcc.ch/site/assets/uploads/sites/4/2020/02/SPM_Updated-Jan20.pdf">helped drive climate change</a>, polluted lakes and bays with <a href="https://theconversation.com/to-reduce-harmful-algal-blooms-and-dead-zones-the-us-needs-a-national-strategy-for-regulating-farm-pollution-186286">nutrient runoff</a> and <a href="https://www.unep.org/news-and-stories/press-release/our-global-food-system-primary-driver-biodiversity-loss">accelerated species losses</a> by turning natural landscapes into monoculture crop fields.</p>
<p>Many U.S. farmers and agricultural researchers would like to grow a wider range of crops and use more sustainable farming methods. But it’s hard for them to figure out what new systems could perform well, especially in a changing climate. Lower-impact farming systems often require deep local knowledge, plus an encyclopedic understanding of plants, weather and climate modeling, geology and more. </p>
<p>That’s where our new approach comes in.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A field of soybean plants, half harvested, stretches to the horizon." src="https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=416&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=416&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=416&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=522&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=522&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540871/original/file-20230802-26048-ybmbaf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=522&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Monoculture farming, like this Iowa soybean field shown during harvest, has contributed to the decline of bees and other pollinators by reducing their food sources.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/an-aerial-view-from-a-drone-shows-soybean-field-as-it-is-news-photo/1181120200">Joe Raedle/Getty Images</a></span>
</figcaption>
</figure>
<h2>Farms as state spaces</h2>
<p>When computer scientists think about complex problems, they often use a concept called a <a href="https://en.wikipedia.org/wiki/State_space_(computer_science)">state space</a>. This approach mathematically represents all of the possible ways in which a system can be configured. Moving through the space entails making choices, and those choices change the state of the system, for better or worse.</p>
<p>As an example, consider a game of chess with a board and two players. Each configuration of the board at a moment in time is a single state of the game. When a player makes a move, it shifts the game to another state.</p>
<p>The whole game can be described by its “state space” – all possible states the game could be in through valid moves the players make. During the game, each player is searching for states that are better for them.</p>
<p>We can think of an agricultural system as a state space in a particular ecosystem. A farm and its layout of plant species at any moment in time represent one state in that state space. The farmer is searching for better states and trying to avoid bad ones.</p>
<p>Both humans and nature shift the farm from one state to another. On any given day, the farmer might do a dozen different things on the land, such as tilling, planting, weeding, harvesting or adding fertilizer. Nature causes minor state transitions, such as plants growing and rain falling, and much more dramatic state transitions during natural disasters such as floods or wildfires. </p>
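<p>To make the chess analogy concrete, here is a toy Python sketch – not the model from our study – in which a two-field farm is a state, a handful of hypothetical planting actions move it between states, and a short look-ahead search keeps the plan whose resulting state scores best.</p>
<pre><code>
# Toy sketch of a farm as a state space; crop names and the scoring rule are
# made up for illustration, not taken from the study's actual model.
from itertools import product

def apply_action(state, action):
    """Return the new farm state after planting a crop in one field."""
    field, crop = action
    new_state = list(state)
    new_state[field] = crop
    return tuple(new_state)

def score(state):
    """Toy objective: reward the number of distinct crops growing on the farm."""
    return len(set(state) - {"fallow"})

start = ("fallow", "fallow")   # one state in the state space: two fallow fields
actions = list(product(range(len(start)), ["maize", "beans", "squash"]))

# Look two moves ahead, as a chess player would, and keep the best-scoring plan.
plans = product(actions, repeat=2)
best = max(plans, key=lambda p: score(apply_action(apply_action(start, p[0]), p[1])))
print(best)
</code></pre>
<p>A real farm state would carry far more detail – soil, weather history, markets – but the search idea is the same: explore possible transitions and keep the ones that move the system toward better states.</p>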
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/-NZIvvhGlR0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Climate change is altering the zones in which major crops like corn and wheat can be grown, reducing yields in some cases and increasing them in others.</span></figcaption>
</figure>
<h2>Finding synergies</h2>
<p>Viewing an agricultural system as a state space makes it possible to broaden choices for farmers beyond the limited options today’s farming systems offer. </p>
<p>Individual farmers don’t have the time or ability to do trial and error for years on their land. But a computing system can draw on agricultural knowledge from many different environments and schools of thought to play a metaphorical chess game with nature that helps farmers identify the best options for their land. </p>
<p>Conventional agriculture limits farmers to a few choices of plant species, farming methods and inputs. Our framework makes it possible to consider higher-level strategies, such as growing multiple crops together or finding management techniques that are best suited to a particular piece of land. Users can search the state space to consider what mix of methods, species and locales could achieve those goals. </p>
<p>For example, if a scientist wants to test five crop rotations – raising planned sequences of crops on the same fields – that each last four years, growing seven plant species, that represents 721 potential rotations. Our approach could use information from <a href="https://doi.org/10.1093/biosci/biac021">long-term ecological research</a> to help find the best potential systems to test. </p>
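<p>As a rough back-of-the-envelope illustration (not the study’s actual search procedure), the snippet below enumerates four-year rotations under one simple assumption – a different crop each year – using made-up species names; the exact number of candidates depends on which constraints are imposed.</p>
<pre><code>
# Enumerate candidate four-year rotations from seven species (names are
# hypothetical). A search procedure could then score each candidate with data
# from long-term trials and keep only the most promising ones for field tests.
from itertools import permutations

species = ["maize", "beans", "squash", "wheat", "oats", "clover", "sunflower"]
rotations = list(permutations(species, 4))   # ordered 4-year sequences, no repeats
print(len(rotations), "candidate rotations under this assumption")
</code></pre>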
<p>One area where we see great potential is <a href="https://agclass.nal.usda.gov/vocabularies/nalt/concept?uri=https://lod.nal.usda.gov/nalt/6581">intercropping</a> – growing different plants in a mixture or close together. Many combinations of specific plants have long been known to grow well together, with each plant helping the others in some way.</p>
<p>The most familiar example is the “three sisters” – maize, squash and beans – developed by <a href="https://www.nal.usda.gov/collections/stories/three-sisters">Indigenous farmers of the Americas</a>. Corn stalks act as trellises for climbing bean vines, while squash leaves shade the ground, keeping it moist and preventing weeds from sprouting. Bacteria on the bean plants’ roots provide nitrogen, an essential nutrient, to all three plants.</p>
<p><div data-react-class="InstagramEmbed" data-react-props="{"url":"https://www.instagram.com/reel/ChpP-J7DkXG/?utm_source=ig_web_copy_link\u0026igshid=MzRlODBiNWFlZA==","accessToken":"127105130696839|b4b75090c9688d81dfd245afe6052f20"}"></div></p>
<p>Cultures throughout human history have had their own favored intercropping systems with similar synergies, such as <a href="https://doi.org/10.1002/9781119521082.ch3">turmeric and mango</a> or <a href="https://doi.org/10.1016/j.agee.2020.107175">millet, cowpea and ziziphus, commonly known as red date</a>. And new work on <a href="https://theconversation.com/how-shading-crops-with-solar-panels-can-improve-farming-lower-food-costs-and-reduce-emissions-202094">agrivoltaics</a> shows that combining solar panels and farming can work surprisingly well: The panels partially shade crops that grow underneath them, and farmers earn extra income by producing renewable energy on their land. </p>
<h2>Modeling alternative farm strategies</h2>
<p>We are working to turn our framework into software that people can use to model agriculture as state spaces. The goal is to enable users to consider alternative designs based upon their intuition, minimizing the costly trial and error that’s now required to test out new ideas in farming. </p>
<p>Today’s approaches largely model and pursue optimizations of existing, often unsustainable systems of agriculture. Our framework enables discovery of new systems of agriculture and then optimization within those new systems.</p>
<p>Users also will be able to specify their objectives to an artificial intelligence-based agent that can perform a search of the farm state space, just as it might search the state space of a chessboard to pick winning moves. </p>
<p>Modern societies have access to many more plant species and much more information about how different species and environments interact than they did a century ago. In our view, agricultural systems aren’t doing enough to leverage all that knowledge. Combining it computationally could help make agriculture more productive, healthy and sustainable in a rapidly changing world.</p><img src="https://counter.theconversation.com/content/203108/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Barath Raghavan receives funding from the National Science Foundation, the Schmidt Family Foundation and Cisco Systems. </span></em></p><p class="fine-print"><em><span>Michael Kantar receives funding from the U.S. Department of Agriculture and the National Science Foundation. </span></em></p>Conventional agriculture offers farmers few choices about which crops to grow or how to raise them. A new approach uses computing to construct better strategies with lower environmental impacts.Barath Raghavan, Associate Professor of Computer Science and Electrical and Computer Engineering, University of Southern CaliforniaMichael Kantar, Associate Professor of Tropical Plants & Soil Sciences, University of HawaiiLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2058422023-05-26T00:44:09Z2023-05-26T00:44:09ZResearchers built an analogue computer that uses water waves to forecast the chaotic future<figure><img src="https://images.theconversation.com/files/528189/original/file-20230525-15-7szay.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2598%2C1901&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Can a computer learn from the past and anticipate what will happen next, like a human? You might not be surprised to hear that some cutting-edge AI models could achieve this feat, but what about a computer that looks a little different – more like a tank of water?</p>
<p>We have built <a href="https://doi.org/10.1209/0295-5075/acd471">a small proof-of-concept computer</a> that uses running water instead of a traditional logical circuitry processor, and forecasts future events via an approach called “reservoir computing”.</p>
<p>In benchmark tests, our analogue computer did well at remembering input data and forecasting future events – and in some cases it even did better than a high-performance digital computer.</p>
<p>So how does it work?</p>
<h2>Throwing stones in the pond</h2>
<p>Imagine two kids, Alice and Bob, playing at the edge of a pond. Bob throws big and small stones into the water one at a time, seemingly at random. </p>
<p>Big and small stones create water waves of different sizes. Alice watches the water waves created by the stones and learns to anticipate what the waves will do next – and from that, she can get an idea of which stone Bob will throw next. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/526742/original/file-20230517-17-2g9fwb.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Bob throws rocks into the pond, while Alice watches the waves and tries to predict what’s coming next.</span>
<span class="attribution"><span class="source">Yaroslav Maksymov</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p><a href="https://towardsdatascience.com/gentle-introduction-to-echo-state-networks-af99e5373c68">Reservoir computers</a> copy the reasoning process taking place in Alice’s brain. They can learn from past inputs to predict the future events.</p>
<p>Although reservoir computers were first proposed using neural networks – computer programs loosely based on the structure of neurons in the brain – they can also be built with <a href="https://link.springer.com/chapter/10.1007/978-3-540-39432-7_63">simple physical systems</a>.</p>
<p>Reservoir computers are analogue computers. An analogue computer represents data continuously, as opposed to digital computers which represent data as abruptly changing binary “zero” and “one” states. </p>
<p>Representing data in a continuous way <a href="https://collection.sciencemuseumgroup.org.uk/objects/co8428222/electronic-storm-surge-modelling-machine-storm-surge-model">enables</a> analogue computers to model certain natural events – ones that occur in a kind of unpredictable sequence called a “<a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/ima.1850010213">chaotic time series</a>” – better than a digital computer.</p>
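<p>The water tank is an analogue device, but the same idea can be sketched digitally. Below is a minimal echo state network in Python with NumPy – a toy stand-in, not our experimental setup – in which a fixed random “reservoir” is driven by a noisy sine wave and only a simple linear readout is trained to predict the next value.</p>
<pre><code>
# Minimal echo state network: the reservoir weights stay fixed and random;
# only the linear readout is trained, mirroring the "bucket of water" idea.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                      # reservoir size
W_in = rng.normal(scale=0.5, size=N)         # fixed random input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for the echo state property

# Toy input: a noisy sine wave; the task is one-step-ahead prediction.
u = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.normal(size=600)

x = np.zeros(N)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])         # waves "sloshing" in the reservoir
    states.append(x.copy())
X = np.array(states)                          # recorded reservoir states
y = u[1:]                                     # targets: the next value of the series

ridge = 1e-6                                  # ridge-regression readout (the only trained part)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("one-step prediction error:", np.mean((X @ W_out - y) ** 2))
</code></pre>
<p>In our experiment, the physics of the flowing water plays the role of the random reservoir; the trained readout is the part that turns the water’s state into a forecast.</p>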
<h2>How to make predictions</h2>
<p>To understand how we can use a reservoir computer to make predictions, imagine you have a record of daily rainfall for the past year and a bucket full of water near you. The bucket will be our “computational reservoir”. </p>
<p>We input the daily rainfall record to the bucket by means of stones. For a day of light rain, we throw a small stone; for a day of heavy rain, a big stone. For a day with no rain, we throw nothing.</p>
<p>Each stone creates waves, which then slosh around the bucket and interact with waves created by other stones. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/theres-a-way-to-turn-almost-any-object-into-a-computer-and-it-could-cause-shockwaves-in-ai-62235">There's a way to turn almost any object into a computer – and it could cause shockwaves in AI</a>
</strong>
</em>
</p>
<hr>
<p>At the end of this process, the state of the water in the bucket gives us a prediction. If the interactions between waves create large new waves, we can say our reservoir computer predicts heavy rains. But if they are small then we should expect only light rain. </p>
<p>It is also possible that the waves will cancel one another, forming a still water surface. In that case we should not expect any rain. </p>
<p>The reservoir makes a weather forecast because the waves in the bucket and rainfall patterns evolve over time following the same laws of physics. </p>
<p>So do many other natural and socio-economic processes. This means a reservoir computer can also forecast <a href="https://towardsdatascience.com/predicting-stock-prices-with-echo-state-networks-f910809d23d4">financial markets</a> and even <a href="https://pubmed.ncbi.nlm.nih.gov/26422421/">certain kinds</a> of <a href="https://www.researchgate.net/publication/308952845_Temporal_Learning_Using_Echo_State_Network_for_Human_Activity_Recognition">human activity</a>.</p>
<h2>Longer-lasting waves</h2>
<p>The “<a href="https://autobencoder.com/2021-04-05-bucket/">bucket of water</a>” reservoir computer has its limits. For one thing, the waves are short-lived. To forecast complex processes such as climate change and population growth, we need a reservoir with more durable waves.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/w-oDnvbV8mY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>One option is “solitons”. These are self-reinforcing waves that keep their shape and move for long distances.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A drinking fountain with water flowing down a metal slope, exhibiting waves." src="https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=705&fit=crop&dpr=1 600w, https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=705&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=705&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=886&fit=crop&dpr=1 754w, https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=886&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/526932/original/file-20230518-22-tq8wpq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=886&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Our reservoir computer used solitary waves like those seen in drinking fountains.</span>
<span class="attribution"><span class="source">Ivan Maksymov</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>For our reservoir computer, we used compact soliton-like waves. You often see such waves in a bathroom sink or a drinking fountain. </p>
<p>In our computer, a thin layer of water flows over a slightly inclined metal plate. A small electric pump changes the speed of the flow and creates solitary waves. </p>
<p>We added a fluorescent material to make the water glow under ultraviolet light, to precisely measure the size of the waves. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Zwu3KEo8f00?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The pump plays the role of falling stones in the game played by Alice and Bob, but the solitary waves correspond to the waves on the water surface. Solitary waves move much faster and live longer than water waves in a bucket, which lets our computer process data at a higher speed. </p>
<h2>So, how does it perform?</h2>
<p>We <a href="https://doi.org/10.1209/0295-5075/acd471">tested</a> our computer’s ability to remember past inputs and to make forecasts for a benchmark set of chaotic and random data. Our computer not only executed all tasks exceptionally well but also outperformed a high-performance digital computer tasked with the same problem. </p>
<p>My colleague <a href="https://www.swinburne.edu.au/research/our-research/access-our-research/find-a-researcher-or-supervisor/researcher-profile/?id=apototskyy">Andrey Pototsky</a> and I also created a mathematical model that enabled us to better understand the physical properties of the solitary waves.</p>
<p>Next, we plan to miniaturise our computer as a <a href="https://www.theregister.com/2021/09/15/microfluidic_processor/">microfluidic processor</a>. Water waves should be able to do computations inside a chip that operates similarly to the silicon chips used in every smartphone. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-was-the-first-computer-122164">What was the first computer?</a>
</strong>
</em>
</p>
<hr>
<p>In the future, our computer may be able to produce reliable long-term forecasts in areas such as climate change, bushfires and financial markets – with much <a href="https://www.nature.com/articles/d41586-022-03212-7">lower cost and wider availability</a> than current supercomputers.</p>
<p>Our computer is also naturally immune to cyber attacks because it does not use digital data. </p>
<p>Our <a href="https://research.csu.edu.au/our-profile/research-centres/aicf">vision</a> is that a soliton-based microfluidic reservoir computer will bring data science and machine learning to rural and remote communities worldwide. But for now, our research work continues.</p><img src="https://counter.theconversation.com/content/205842/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ivan Maksymov does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In the future, our computer may be able to produce long-term forecasts in areas such as climate change, bushfires and financial markets – while being cheaper and more accessible than supercomputers.Ivan Maksymov, Principal Research Fellow, Charles Sturt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2051712023-05-23T12:25:42Z2023-05-23T12:25:42ZNew approach to teaching computer science could broaden the subject’s appeal<figure><img src="https://images.theconversation.com/files/527051/original/file-20230518-23-xsgvbi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Language arts students can program chatbots for literary characters.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/side-view-of-youthful-african-american-schoolboy-royalty-free-image/1425235236">shironosov/iStock/Getty Images Plus</a></span></figcaption></figure><p>Despite <a href="https://www.bls.gov/ooh/computer-and-information-technology/computer-and-information-research-scientists.htm#tab-1">growing demand for computer science skills</a> in professional careers and many areas of life, K-12 schools <a href="https://www.eschoolnews.com/steam/2023/02/23/what-is-computer-science-education-lacking/">struggle to teach</a> computer science to the next generation.</p>
<p>However, a new approach to computer science education – called <a href="https://www.fierceeducation.com/teaching-learning/teaching-computational-thinking-essential-future-college-students">integrated computing</a> – addresses the main barriers that schools face when adding computer science education. These barriers include a <a href="https://news.gallup.com/reports/196379/trends-state-computer-science-schools.aspx">lack of qualified computer science teachers</a>, a lack of funds and a focus on courses tied to standardized tests.</p>
<p>Integrated computing teaches computer science skills like programming and computer literacy within traditional courses. For example, students can use integrated computing activities to <a href="https://youtu.be/KG_JqpmmkdQ">create geometric patterns in math</a>, <a href="https://youtu.be/x5w6x7f33Wk">simulate electromagnetic waves in science</a> and <a href="https://youtu.be/654BOJwAWCg">create chatbots for literary characters</a> in language arts. </p>
<p>As a <a href="https://education.gsu.edu/profile/lauren-margulieux/">professor of learning technologies</a>, I have been <a href="https://scholar.google.com/citations?user=YGV0Y24AAAAJ&hl=en&oi=sra">designing integrated computing activities</a> for K-12 students for the past five years. I work with faculty and students in teacher training programs to <a href="http://www.doi.org/10.26716/jcsi.2022.11.15.35">create and test integrated computing activities</a> across all academic subjects. </p>
<p>In <a href="https://laurenmarg.com/research/">my research</a>, I have found that integrated computing solves three major hurdles to teaching computer science education in K-12 schools.</p>
<h2>Challenges to teaching computer science</h2>
<p>Fitting a new academic discipline into an <a href="https://www.oecd-ilibrary.org/sites/0ebc645c-en/index.html?itemId=/content/component/0ebc645c-en">already crowded curriculum</a> can be a challenge. Integrated computing allows computer science education to become part of learning in other classes, the way reading skills are also used in science, math and language arts classes. </p>
<p>Teacher knowledge is <a href="https://doi.org/10.1080/07380569.2023.2178868">another difficulty when it comes to teaching computer science</a> in K-12 schools. While people who specialize in computer science are often recruited to more lucrative careers than teaching, integrated computing develops all teachers’ computer science knowledge. Teachers do not need to become computer science experts to teach computer literacy and programming skills to their students. </p>
<figure class="align-center ">
<img alt="Teacher holds tablet while working in classroom" src="https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/527129/original/file-20230518-19-2wsuw7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Teachers do not need a computer science degree to incorporate computing into their classrooms.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/indian-teacher-using-digital-tablet-in-classroom-royalty-free-image/526297603">LWA/Dann Tardif/DigitalVision Collection/Getty Images</a></span>
</figcaption>
</figure>
<p>In fact, the most surprising result of my research is how quickly teachers learn to teach integrated computing activities. In about two hours, <a href="https://www.doi.org/10.26716/jcsi.2022.11.15.35">teachers can use a pre-made computer science lesson</a> in their classrooms. In the future, I will teach them to use artificial intelligence to create their own lessons for their students. For example, a science teacher recently asked me how she could create a data analysis activity for her class. AI tools would allow her to <a href="https://www.ironhack.com/us/en/blog/chatgpt-for-data-analysts">quickly design the technical aspects</a> of this activity. </p>
<p>And finally, integrated computing also addresses students’ reluctance to take elective computer science classes when they have little knowledge of computer science. In 2022, over half of U.S. public high schools offered computer science, but just <a href="https://www.edweek.org/technology/computer-science-education-is-gaining-momentum-but-some-say-not-fast-enough/2022/09">6% of students</a> took these classes. Students who do take computer science in high school typically have had <a href="https://doi.org/10.2190/9LE6-MBXA-JDPG-UG90">early exposure to computer science</a>. Integrated computing can give all students early exposure to computer science, which I believe will increase the number of students who take computer science courses later in school. </p>
<h2>Computer science for everyone</h2>
<p>Early exposure to computer science in school is especially important for students from groups <a href="https://www.brookings.edu/research/exploring-the-state-of-computer-science-education-amid-rapid-policy-expansion/">underrepresented in computer science</a>. A <a href="https://advocacy.code.org/stateofcs">2022 report</a> from Code.org, a nonprofit that advocates for more computer science education in K-12 schools, found that students who are Latino, female or from low-income or rural areas are <a href="https://www.edweek.org/technology/computer-science-education-is-gaining-momentum-but-some-say-not-fast-enough/2022/09">less likely</a> to be enrolled in foundational computer science courses.</p>
<p>Teachers who want to build their computer science knowledge and apply it to their classroom can try these free self-paced, online <a href="https://gavirtualpd.catalog.instructure.com/browse/computerscience">integrated computing courses</a> that I developed, and which are tied to micro-credentials. Also, this sortable list of <a href="https://integratedcomputing.org/">integrated computing activities</a> provides free lesson plans. The activities require only a computer – no prior knowledge is needed, and young learners can complete them outside of class, too.</p>
<p>Integrated computing provides a path to increase computer literacy for all K-12 students. As technology advances at an increasing rate, I believe schools must take care that our young people do not fall behind.</p><img src="https://counter.theconversation.com/content/205171/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lauren Margulieux receives funding from Snap, Inc., Google, the National Science Foundation, and the US Department of Education. </span></em></p>Integrated computing enables teachers to incorporate basic programming skills into K-12 students’ regular math, science and language arts classes.Lauren Margulieux, Associate Professor of Learning Technologies, Georgia State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1991272023-04-13T14:38:43Z2023-04-13T14:38:43ZVenn: the man behind the famous diagrams – and why his work still matters today<figure><img src="https://images.theconversation.com/files/507859/original/file-20230202-16-eghiz9.png?ixlib=rb-1.1.0&rect=0%2C2%2C1974%2C1105&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><span class="license">Author provided</span></span></figcaption></figure><p>April 2023 marks the 100th anniversary of the death of mathematician and philosopher John Venn. You may well be familiar with Venn diagrams – the ubiquitous pictures of typically two or three intersecting circles, illustrating the relationships between two or three collections of things. </p>
<p>For example, during the pandemic, Venn diagrams <a href="https://www.childrens.com/health-wellness/allergies-or-covid-19">helped to illustrate symptoms</a> of COVID-19 that are distinct from seasonal allergies. They are also often taught to school children and are typically part of the early curriculum for logic and databases in higher education.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="John Venn." src="https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=815&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=815&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=815&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1025&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1025&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520557/original/file-20230412-18-o440sw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1025&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">John Venn.</span>
<span class="attribution"><span class="source">wikipedia</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Venn <a href="https://www.britannica.com/biography/John-Venn">was born in Hull, UK,</a> in 1834. His early life in Hull was influenced by his father, an Anglican priest – it was expected John would follow in his footstep. He did initially begin a career in the Anglican church, but later moved into academia at the University of Cambridge. </p>
<p>One of Venn’s major achievements was to find a way to visualise a mathematical area called set theory. Set theory is an area of mathematics which can help to formally describe properties of collections of objects.</p>
<p>For example, we could have a set of cars, C. Within this set, there could be subsets such as the set of electric cars, E, the set of petrol-powered cars, P, and the set of diesel-powered cars, D. Given these, we can operate on them – for example, applying car charges to the sets P and D, and a discount to the set E. </p>
<p>These sorts of operations form the basis of databases, as well as being used in many fundamental areas of science. Other major works of Venn’s include probability theory and symbolic logic. Venn had initially used diagrams developed by the Swiss <a href="https://mathshistory.st-andrews.ac.uk/Biographies/Euler/">mathematician Leonhard Euler</a> to <a href="https://doi.org/10.1007/s10516-022-09642-2">show some relationships between sets</a>, which he then developed into his famous Venn diagrams. </p>
<p>Venn used the diagrams to prove a form of logical statement known as a categorical syllogism. This can be used to model reasoning. Here’s an example: “All computers need power. All AI systems are computers.” We can chain these together to the conclusion that “all AI systems need power”. </p>
<p>Today, we are familiar with such reasoning to illustrate how different collections relate to each other. For example, the SmartArt tool in Microsoft products lets you create a Venn diagram to illustrate the relationships between different sets. In our earlier car example, we could have a diagram showing electric cars, E, and petrol powered cars, P. The set of hybrid cars that have a petrol engine would be in the intersection of P and E.</p>
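<p>Readers who use Python can try these ideas directly with the built-in set type; the car models and categories below are made up for illustration.</p>
<pre><code>
# Set operations on hypothetical car data, mirroring the E, P and D example.
cars     = {"model_s", "leaf", "prius", "civic", "golf_tdi"}
electric = {"model_s", "leaf", "prius"}          # E: cars with an electric motor
petrol   = {"prius", "civic"}                    # P: cars with a petrol engine
diesel   = {"golf_tdi"}                          # D: diesel-powered cars

hybrids = electric.intersection(petrol)          # the overlap of the two circles
charged = petrol.union(diesel)                   # cars that would incur a charge
print(hybrids, charged)

# Venn's categorical syllogism as set containment: if AI systems form a subset
# of computers, and computers a subset of things needing power, the chain follows.
needs_power = {"laptop", "phone", "chess_ai", "kettle"}
computers   = {"laptop", "phone", "chess_ai"}
ai_systems  = {"chess_ai"}
print(ai_systems.issubset(computers) and computers.issubset(needs_power)
      and ai_systems.issubset(needs_power))
</code></pre>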
<h2>Logic and computing</h2>
<p>The visualisation of sets (and databases) is helpful, but the importance of Venn’s work then – and now – is the way they allowed proof of <a href="https://plato.stanford.edu/entries/boole/">George Boole’s ideas</a> of logic as a formal science. </p>
<p>Venn used his diagrams to illustrate and explore such “<a href="https://www.tandfonline.com/doi/full/10.1080/01445340.2020.1758387">symbolic logic</a>” – defending and extending it. Symbolic logic underpins modern computing, and Boolean logic is a key part of the design of modern computer systems – making his work relevant today. </p>
<p>Venn’s work was also crucial to the work of philosopher <a href="https://www.nobelprize.org/prizes/literature/1950/russell/biographical/">Bertrand Russell</a>, who showed that some problems are unsolvable. Such problems can be expressed using sets. One example can be expressed with the “<a href="https://www.britannica.com/topic/barber-paradox">Barber paradox</a>”. Suppose Wikipedia had an article listing all the articles that don’t contain themselves – itself a set. Is this new article in that set? </p>
<p>Luckily we can visualise that with a Venn diagram with two circles, where one circle is the set of entries that don’t include themselves, A, and the other circle is the set of entries that do include themselves, B. </p>
<p>We can then ask the question: where do we put the article that contains all the articles that don’t contain themselves? Have a think about it, then see where you would put it. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=350&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=350&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=350&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=440&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=440&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519094/original/file-20230403-28-a4koz4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=440&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A Venn diagram of two sets - a set A of articles that do contain themselves, and a set B of articles that don’t.</span>
</figcaption>
</figure>
<p>The problem is that the article cannot go on the left, because then it would contain itself and the listing would be inconsistent. It cannot go on the right either, because then it would be missing from a list that is supposed to be complete. It can’t be in both, yet it must be in one or the other. This paradox illustrates how unsolvable statements can arise – they can be expressed perfectly well within the logical system, but are ultimately unanswerable. We could extend our system to resolve this one, but we would then end up with another unanswerable question. </p>
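<p>The deadlock can even be checked mechanically. The tiny Python sketch below tries both possible answers for the special article and confirms that neither is consistent with the article’s own definition.</p>
<pre><code>
# Brute-force check of the paradox: for each guess about whether the special
# article contains itself, ask whether that guess agrees with the rule that the
# article lists exactly the articles which do NOT contain themselves.
def consistent(contains_itself):
    listed_in_itself = not contains_itself   # what the article's own rule demands
    return contains_itself == listed_in_itself

print(consistent(True), consistent(False))   # False False: no consistent answer exists
</code></pre>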
<p>Venn’s diagrams were crucial in understanding this. And this area of science is still important, for example when <a href="https://doi.org/10.1038/d41586-019-00083-3">considering the limitations</a> of machine learning and AI, where we may ask questions that cannot be answered. </p>
<p>Venn also had an interest in building mechanical machines – <a href="https://mathshistory.st-andrews.ac.uk/Biographies/Venn/">including a bowling machine</a> which proved so effective it was able to bowl out some top Australian batsmen of the day.</p>
<p>Following his abstract work on logic, Venn developed the concept of a logical-diagram machine – a device for carrying out logical reasoning mechanically. This idea, put forward in 1881, would take many decades to be realised in the form of modern computers. </p>
<p>We remember Venn here in Hull, with a bridge close to his birthplace decorated with Venn circle inspired artwork. At the University of Hull’s main administration building, there’s an intersection of management and academia which is called the Venn building.</p><img src="https://counter.theconversation.com/content/199127/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Neil Gordon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Venn diagrams have helped the development of logic and computing.Neil Gordon, Lecturer in Computer Science, University of HullLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1981322023-03-13T12:26:07Z2023-03-13T12:26:07ZWhat exactly is the internet? A computer scientist explains what it is and how it came to be<figure><img src="https://images.theconversation.com/files/514565/original/file-20230309-22-6ji5en.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C5190%2C3441&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The internet is used for a lot more than just surfing the web.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/fourth-grade-students-work-on-laptops-in-class-royalty-free-image/608052049">Jonathan Kirn/The Image Bank via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>What exactly is the internet? Nora, age 8, Akron, Ohio</strong></p>
</blockquote>
<hr>
<p>The internet is a global collection of computers that know how to send messages to one another. Practically everything connected to the internet is indeed a computer – or has one “baked inside” of it. </p>
<p>In the early 1960s, computers were used only for special purposes, like <a href="https://www.sciencesource.com/1756131-livermore-advanced-research-computer-1960.html">scientific research</a>. There weren’t a lot of them because they were large and expensive. One computer and its attached accessories could <a href="https://www.pimall.com/nais/pivintage/burroughscomputer.html">easily fill a room</a>. To exchange data, people would plan time to work together, and one computer would <a href="https://medium.com/dish/75-years-of-innovation-acoustic-modem-6a5e56e5b6ee">connect to another with a telephone call</a>.</p>
<p>The U.S. government wanted a network that would allow computers to communicate automatically, <a href="https://www.internethalloffame.org/2012/09/06/what-do-h-bomb-and-internet-have-common-paul-baran/">even if some telephone lines were cut off</a>. Suppose you wanted to send a message from Computer A to Computer B in each of three different types of networks. The first is a network with one central computer connected to all the others as spokes. The second is a network of several of these hub-and-spoke networks with their hubs connected. The third is a network where every computer is connected to several others, forming a kind of mesh. Which do you think would be most reliable if some computers and links were damaged? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="three diagrams showing many tiny figures connected by lines" src="https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=451&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=451&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=451&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=567&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=567&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514724/original/file-20230310-462-lhuuzz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=567&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">To get a message from A to B, which type of network is most likely to keep working if some of the lines are cut?</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:P2P_Topology.jpg">Txelu Balboa via Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The first network is vulnerable, because if the central computer is lost, then none of the computers can communicate. The second network is vulnerable because if any of the hub computers are lost, the path between A and B is cut. But in the third network, many individual computers and links could be lost and there would still be a path to connect A and B. So the third network would be the most reliable.</p>
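<p>You can test this reasoning with a few lines of Python. The sketch below describes each layout as a list of links, knocks out one computer, and uses a breadth-first search to check whether a message can still travel from A to B; the layouts are simplified toy versions of the diagrams above.</p>
<pre><code>
# Check whether A can still reach B after some computers fail, using a
# breadth-first search over whatever links survive.
from collections import deque

def reachable(links, start, goal, removed):
    neighbours = {}
    for a, b in links:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in neighbours.get(node, set()) - removed - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

star = [("A", "hub"), ("B", "hub"), ("C", "hub")]   # hub-and-spoke layout
mesh = [("A", "B"), ("A", "C"), ("B", "C")]          # mesh layout
print(reachable(star, "A", "B", removed={"hub"}))    # False: the centre was the only path
print(reachable(mesh, "A", "B", removed={"C"}))      # True: another route survives
</code></pre>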
<h2>Hot potatoes</h2>
<p>An American engineer named <a href="https://www.rand.org/about/history/baran.html">Paul Baran</a> worked on this problem at a company called the Rand Corp. In 1962, he published a new idea for computer networks, which he called “<a href="https://culture.pl/en/article/how-paul-baran-invented-the-internet">hot potato networking</a>.”</p>
<p>In Baran’s idea, a message would be broken up into lots of little pieces – the potatoes. When Computer A wanted to send its message to Computer B, it would individually send the little potatoes to a neighbor computer. That computer would pass them along in the right direction as soon as it could. To make sure messages were delivered quickly, the message pieces were treated as if they were hot, so you didn’t want them in your hands for too long.</p>
<p>The messages included a sequence number so when they arrived at Computer B, the final destination computer, that machine would know how to put them in the proper order to receive the full message.</p>
<p>Baran’s idea got implemented as <a href="https://www.ibiblio.org/pioneers/baran.html">the ARPANET</a>. This network was the immediate predecessor to today’s internet. </p>
<p>Instead of hot potatoes, the system got a more formal name, which we still use: “<a href="https://www.geeksforgeeks.org/packet-switched-network-psn-in-networking/">packet switched networking</a>.” The potato got renamed as a packet – a small piece of the full message. </p>
<p><a href="https://en.wikipedia.org/wiki/Vint_Cerf">Vinton Cerf</a>, an American computer scientist, is known as one of the fathers of the internet. He contributed many essential ideas, including that the receiving computer could ask the sending computer for a packet that went missing – which they sometimes do. This has the name <a href="https://www.wired.com/2012/04/epicenter-isoc-famers-qa-cerf/">Transmission Control Protocol</a>, or TCP.</p>
<h2>A web of pages</h2>
<p>Another important contributor was <a href="https://en.wikipedia.org/wiki/Tim_Berners-Lee">Tim Berners-Lee</a>, a British computer scientist. Berners-Lee was working at CERN, the European Organization for Nuclear Research. He wanted to create a system for his colleagues to better share their research results with one another.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a photograph of a man sitting in front of a cathode ray tube computer monitor" src="https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=385&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=385&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=385&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=484&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=484&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514746/original/file-20230310-22-cbebaz.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=484&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Tim Berners-Lee invented the World Wide Web in the early 1990s.</span>
<span class="attribution"><a class="source" href="https://cds.cern.ch/images/CERN-GE-9407011-31">CERN</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Around 1990, Berners-Lee came up with the idea that a computer could host a collection of “pages,” each of which had <a href="https://webfoundation.org/about/vision/history-of-the-web/">text, images and links to other pages</a>. He created an easy way for links to specify any computer – the concept of the URL, or <a href="https://www.welcometothejungle.com/en/articles/btc-url-internet">Uniform Resource Locator</a>.</p>
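<p>Python’s standard library can pull a URL apart into the pieces Berners-Lee defined; the address below is a made-up example.</p>
<pre><code>
# Split a (hypothetical) URL into its parts using the standard library.
from urllib.parse import urlparse

parts = urlparse("https://www.example.org/research/results.html")
print(parts.scheme)   # https - which protocol to speak
print(parts.netloc)   # www.example.org - which computer to contact
print(parts.path)     # /research/results.html - which page on that computer
</code></pre>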
<p>Berners-Lee named the system the <a href="https://thenextweb.com/news/how-the-world-wide-web-was-nearly-called-the-information-mesh">World Wide Web</a>. He wrote the code for the first web browser, to view web pages, and web server, to deliver them. If you see a URL that includes “www” – that’s from the original name.</p>
<p>Berners-Lee may have been planning to use the web particularly to share text, images and files. But the earlier work on the internet <a href="https://www.thebroadcastbridge.com/content/entry/10882/a-brief-history-of-ip-audio-networks">made the web suitable for video and sound, too</a>. YouTube, Instagram and TikTok are built using the same rules, or protocols, developed by Cerf and Berners-Lee.</p>
<h2>Internet of Things</h2>
<p>In the past 20 years, computers have become even more powerful and inexpensive. Now, a computer chip that can <a href="https://www.nabto.com/how-much-iot-device-cost-business/">connect directly to the internet sells for US$5</a> – a lot less than today’s laptops and cellphones (about $300) or yesterday’s room-size computers ($1 million or more!). </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a refrigerator with a water dispenser on the left door and a large display screen on the right door" src="https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=659&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=659&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=659&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=829&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=829&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514561/original/file-20230309-22-mth9ci.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=829&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many newer appliances like this smart refrigerator are connected to the internet.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Smart_refrigerator#/media/File:Samsungfamilyhub.png">Paul Stefaan Mooij/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span>
</figcaption>
</figure>
<p>This lower cost has led to <a href="https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/">millions upon millions</a> of devices connected to the internet. These devices include sensors. A <a href="https://www.safewise.com/smart-home-faq/how-do-smart-thermostats-work/">smart thermostat</a> monitors your house using a temperature sensor. A security camera keeps an eye on your front porch using an array of tiny light sensors.</p>
<p>These devices also include <a href="https://www.geeksforgeeks.org/actuators-in-iot/">actuators – mechanisms that control activity</a> in the physical world. For example, a smart thermostat can turn on and off the heating and cooling systems in your house.</p>
<p>Together, all these smart devices are called the <a href="https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot">Internet of Things</a>, or IoT. The internet includes not only computers and phones, but all these IoT devices. You may have a <a href="https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot">smart refrigerator</a> that has a camera inside of it. When it notices you’re out of milk, it will send a message to your cellphone, reminding you to buy more.</p>
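<p>A smart-fridge program might boil down to something like the Python sketch below – a made-up illustration in which a stubbed-in sensor reading triggers a message sent over the internet.</p>
<pre><code>
# Hypothetical smart-fridge logic: read a sensor, and if the milk is gone,
# send a message over the internet (the sending function is a stand-in here).
import json, time

def read_milk_cartons():
    """Stubbed-in camera sensor inside the fridge."""
    return 0   # pretend the camera sees no milk cartons

def publish(topic, payload):
    """Stand-in for sending a message to your phone, e.g. via MQTT or HTTP."""
    print(f"send to {topic}: {json.dumps(payload)}")

if read_milk_cartons() == 0:
    publish("home/fridge/alerts", {"item": "milk", "action": "buy more", "time": time.time()})
</code></pre>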
<p>Just about everything is connected to the internet now.</p>
<hr>
<p><em>Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to <a href="mailto:curiouskidsus@theconversation.com">CuriousKidsUS@theconversation.com</a>. Please tell us your name, age and the city where you live.</em></p><img src="https://counter.theconversation.com/content/198132/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Fred Martin receives funding from the National Science Foundation and Google.</span></em></p>Almost everybody uses the internet just about every day. But do you really know what the internet is?Fred Martin, Professor of Computer Science, UMass LowellLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1919302023-01-30T13:12:46Z2023-01-30T13:12:46ZLimits to computing: A computer scientist explains why even in the age of AI, some problems are just too difficult<figure><img src="https://images.theconversation.com/files/506497/original/file-20230125-24-e7inac.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5700%2C3788&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Computers are growing more powerful and more capable, but everything has limits.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/futuristic-semiconductor-and-circuit-board-royalty-free-image/1366897838">Yuichiro Chino/Moment via Getty Images</a></span></figcaption></figure><p>Empowered by artificial intelligence technologies, computers today can <a href="https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-chatbot-messages/672411/">engage in convincing conversations</a> with people, <a href="https://www.nbcnews.com/mach/science/ai-can-now-compose-pop-music-even-symphonies-here-s-ncna1010931">compose songs</a>, <a href="https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html">paint paintings</a>, play <a href="https://www.wired.com/story/alphabets-latest-ai-show-pony-has-more-than-one-trick/">chess and go</a>, and <a href="https://doi.org/10.1007/s12652-021-03612-z">diagnose diseases</a>, to name just a few examples of their technological prowess. </p>
<p>These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful. </p>
<p>There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms – basically <a href="https://theconversation.com/what-is-an-algorithm-how-computers-know-what-to-do-with-data-146665">sets of instructions</a> – are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.</p>
<p>These hurdles include problems that are impossible for computers to solve and problems that are theoretically solvable but in practice beyond the capabilities of even the most powerful computers imaginable today. Mathematicians and computer scientists attempt to determine whether a problem is solvable by trying it out on an imaginary machine.</p>
<h2>An imaginary computing machine</h2>
<p>The modern notion of an algorithm, known as a Turing machine, was formulated in 1936 by British mathematician <a href="https://www.britannica.com/biography/Alan-Turing/Computer-designer">Alan Turing</a>. It’s an imaginary device that imitates how arithmetic calculations are carried out with a pencil on paper. The Turing machine is the template all computers today are based on.</p>
<p>To accommodate computations that would need more paper if done manually, the supply of imaginary paper in a <a href="https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html">Turing machine</a> is assumed to be unlimited. This is equivalent to an imaginary limitless ribbon, or “tape,” of squares, each of which is either blank or contains one symbol. </p>
<p>The machine is controlled by a finite set of rules and starts on an initial sequence of symbols on the tape. The operations the machine can carry out are moving to a neighboring square, erasing a symbol and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or “halts,” the symbols remaining on the tape are the output or result. </p>
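<p><em>To make this concrete, here is a minimal sketch of a Turing machine simulator in Python (an illustration added here, not part of the original article). The rule table, the use of “_” for a blank square and the unary-increment example are illustrative assumptions rather than a full formal treatment.</em></p>
<pre><code># Minimal Turing machine simulator: a finite rule table drives a head
# over an unbounded tape of symbols, as described above.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))          # sparse stand-in for an unlimited tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" marks a blank square
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol           # write (or erase) a symbol
        head += {"L": -1, "R": 1}[move]   # move to a neighbouring square
    # the symbols left on the tape are the output
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: append a "1" to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}

print(run_turing_machine(rules, "111"))   # -> "1111"
</code></pre>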
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/dNRDvLACg5Q?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is a Turing machine?</span></figcaption>
</figure>
<p>Computing is often about decisions with yes or no answers. By analogy, a medical test (type of problem) checks if a patient’s specimen (an instance of the problem) has a certain disease indicator (yes or no answer). The instance, represented in a Turing machine in digital form, is the initial sequence of symbols. </p>
<p>A problem is considered “solvable” if a Turing machine can be designed that halts for every instance, whether positive or negative, and correctly determines which answer the instance yields. </p>
<h2>Not every problem can be solved</h2>
<p>Many problems are solvable using a Turing machine and therefore can be solved on a computer, while many others are not. For example, the domino problem, a variation of the tiling problem formulated by Chinese American mathematician <a href="https://digitalcommons.rockefeller.edu/faculty-members/109/">Hao Wang</a> in 1961, is not solvable. </p>
<p>The task is to use a set of dominoes to cover an entire grid, matching the number of pips on the ends of abutting dominoes as in most dominoes games. It turns out that there is no algorithm that can start with a set of dominoes and determine whether or not the set will completely cover the grid.</p>
<h2>Keeping it reasonable</h2>
<p>A number of solvable problems can be solved by algorithms that halt in a reasonable amount of time. These “<a href="https://mathworld.wolfram.com/PolynomialTime.html">polynomial-time algorithms</a>” are efficient algorithms, meaning it’s practical to use computers to solve instances of them.</p>
<p>Thousands of other solvable problems are not known to have polynomial-time algorithms, despite ongoing intensive efforts to find such algorithms. These include the Traveling Salesman Problem. </p>
<p>The Traveling Salesman Problem asks whether a set of points, some of which are directly connected (together called a graph), has a path that starts from some point, goes through every other point exactly once and comes back to the original point. Imagine that a salesman wants to find a route that passes all households in a neighborhood exactly once and returns to the starting point. </p>
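<p><em>The Python sketch below (an illustration added here, not from the original article) checks for such a round trip by brute force on a tiny made-up graph; the edge list and the fixed starting point are assumptions for the example.</em></p>
<pre><code># Brute-force search for a round trip that visits every point exactly once,
# following the edges of the graph (the graph itself is made up).
from itertools import permutations

edges = {(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)}   # hypothetical neighbourhood
connected = lambda a, b: (a, b) in edges or (b, a) in edges

def has_round_trip(n_points):
    for route in permutations(range(1, n_points)):   # fix point 0 as the start
        full = (0,) + route + (0,)
        if all(connected(a, b) for a, b in zip(full, full[1:])):
            return True
    return False

print(has_round_trip(4))   # True: 0 -> 1 -> 2 -> 3 -> 0
# Checking every route examines (n-1)! orderings, which explodes quickly:
# 10 points give 362,880 routes; 20 points give roughly 1.2e17 routes.
</code></pre>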
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/xi5dWND499g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Traveling Salesman Problem quickly gets out of hand when you get beyond a few destinations.</span></figcaption>
</figure>
<p>These problems, called <a href="https://www.mathsisfun.com/sets/np-complete.html">NP-complete</a>, were independently formulated and shown to exist in the early 1970s by two computer scientists, American Canadian <a href="https://amturing.acm.org/award_winners/cook_n991950.cfm">Stephen Cook</a> and Ukrainian American <a href="https://academickids.com/encyclopedia/index.php/Leonid_Levin">Leonid Levin</a>. Cook, whose work came first, was awarded the 1982 Turing Award, the highest honor in computer science, for this work.</p>
<h2>The cost of knowing exactly</h2>
<p>The best-known algorithms for NP-complete problems are essentially searching for a solution from all possible answers. The Traveling Salesman Problem on a graph of a few hundred points would take years to run on a supercomputer. Such algorithms are inefficient, meaning there are no mathematical shortcuts.</p>
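<p><em>As a rough, hedged illustration of why exhaustive search fails, the arithmetic below assumes 300 points and a hypothetical machine checking a billion billion (10<sup>18</sup>) routes per second; the exact numbers are made up, but the conclusion is not sensitive to them.</em></p>
<pre><code># Rough arithmetic, with made-up but generous assumptions: 300 points,
# and a machine that checks 10**18 candidate routes every second.
import math

routes = math.factorial(300 - 1)                  # (n-1)! candidate round trips
checks_per_year = 10**18 * 60 * 60 * 24 * 365     # routes checked in one year
years = routes // checks_per_year
print(f"roughly 10^{len(str(years)) - 1} years")  # vastly longer than the age of the universe
</code></pre>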
<p>Practical algorithms that address these problems in the real world can only offer approximations, though <a href="https://theconversation.com/planning-the-best-route-with-multiple-destinations-is-hard-even-for-supercomputers-a-new-approach-breaks-a-barrier-thats-stood-for-nearly-half-a-century-148308">the approximations are improving</a>. Whether there are efficient polynomial-time algorithms that can <a href="https://www.claymath.org/millennium-problems/p-vs-np-problem">solve NP-complete problems</a> is among the <a href="https://www.claymath.org/millennium-problems/millennium-prize-problems">seven millennium open problems</a> posted by the Clay Mathematics Institute at the turn of the 21st century, each carrying a prize of US$1 million.</p>
<h2>Beyond Turing</h2>
<p>Could there be a new form of computation beyond Turing’s framework? In 1982, American physicist <a href="http://www.richardfeynman.com/">Richard Feynman</a>, a Nobel laureate, put forward the idea of computation based on quantum mechanics. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jHoEjvuPoB8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is a quantum computer?</span></figcaption>
</figure>
<p>In 1994, Peter Shor, an American applied mathematician, presented a quantum algorithm to <a href="https://www.geeksforgeeks.org/shors-factorization-algorithm/">factor integers in polynomial time</a>. Mathematicians believe that this is unsolvable by polynomial-time algorithms in Turing’s framework. Factoring an integer means finding a smaller integer greater than 1 that divides it. For example, the integer 688,826,081 is divisible by the smaller integer 25,253, because 688,826,081 = 25,253 x 27,277. </p>
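<p><em>For readers who want to see what “factoring” means in code, here is a classical trial-division sketch in Python. It is emphatically not Shor’s quantum algorithm – just the slow, obvious approach applied to the example number above.</em></p>
<pre><code># Classical trial division: the obvious (but slow) way to factor an integer.
# Shor's quantum algorithm reaches the same goal far faster on large numbers;
# this sketch only illustrates what "factoring" means.
def smallest_factor(n):
    for d in range(2, int(n ** 0.5) + 1):   # only need to try divisors up to sqrt(n)
        if n % d == 0:
            return d
    return n                                 # n itself is prime

n = 688_826_081
d = smallest_factor(n)
print(d, n // d, d * (n // d) == n)          # 25253 27277 True
</code></pre>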
<p>A major algorithm called the <a href="https://www.geeksforgeeks.org/rsa-algorithm-cryptography/">RSA algorithm</a>, widely used in securing network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, should it become a reality, will <a href="https://theconversation.com/quantum-computers-threaten-our-whole-cybersecurity-infrastructure-heres-how-scientists-can-bulletproof-it-196065">change the landscape of cybersecurity</a>. </p>
<p>Can a full-fledged quantum computer be built to factor integers and solve other problems? Some scientists believe it can be. Several groups of scientists around the world are working to build one, and some have already built small-scale quantum computers. </p>
<p>Nevertheless, as with every novel technology invented before it, quantum computation is almost certain to run into issues that would impose new limits.</p><img src="https://counter.theconversation.com/content/191930/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jie Wang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In the age of AI, people might wonder if there’s anything computers can’t do. The answer is yes. In fact, there are numerous problems that are beyond the reach of even the most powerful computers.Jie Wang, Professor of Computer Science, UMass LowellLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1970502023-01-09T03:56:37Z2023-01-09T03:56:37ZAI might be seemingly everywhere, but there are still plenty of things it can’t do – for now<figure><img src="https://images.theconversation.com/files/503561/original/file-20230109-19-yxxit5.jpg?ixlib=rb-1.1.0&rect=5%2C379%2C3828%2C2379&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/hJ5uMIRNg5k">Mahdis Mousavi/Unsplash</a></span></figcaption></figure><p>These days, we don’t have to wait long until the next breakthrough in artificial intelligence (AI) impresses everyone with capabilities that previously belonged only in science fiction. </p>
<p>In 2022, <a href="https://theconversation.com/text-to-image-ai-powerful-easy-to-use-technology-for-making-art-and-fakes-195517">AI art generation tools</a> such as Open AI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.</p>
<p>Unlike previous developments, these text-to-image tools quickly found their way from research labs to <a href="https://www.vox.com/recode/2023/1/5/23539055/generative-ai-chatgpt-stable-diffusion-lensa-dall-e">mainstream culture</a>, leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylised images of its users.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world</a>
</strong>
</em>
</p>
<hr>
<p>In December, a chatbot called ChatGPT stunned users with its <a href="https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908">writing skills</a>, leading to predictions the technology will soon be able to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4314839">pass professional exams</a>. ChatGPT reportedly gained one million users in less than a week. Some school officials have already <a href="https://www.abc.net.au/news/2023-01-08/artificial-intelligence-chatgpt-chatbot-explained/101835670">banned it</a> for fear students would use it to write essays. Microsoft is <a href="https://www.theguardian.com/technology/2023/jan/05/microsoft-chatgpt-bing-search-engine">reportedly</a> planning to incorporate ChatGPT into its Bing web search and Office products later this year. </p>
<p>What does the unrelenting progress in AI mean for the near future? And is AI likely to threaten certain jobs in the following years?</p>
<p>Despite these impressive recent AI achievements, we need to recognise there are still significant limitations to what AI systems can do. </p>
<h2>AI excels at pattern recognition</h2>
<p>Recent advances in AI rely predominantly on machine learning algorithms that discern complex patterns and relationships from vast amounts of data. This training is then used for tasks like prediction and data generation. </p>
<p>The development of current AI technology relies on optimising predictive power, even if the goal is to generate new output. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know</a>
</strong>
</em>
</p>
<hr>
<p>For example, GPT-3, the language model behind ChatGPT, was trained to predict what follows a piece of text. GPT-3 then leverages this predictive ability to continue an input text given by the user. </p>
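<p><em>The sketch below is a deliberately tiny, counting-based stand-in for “predict what follows a piece of text”. The training sentence is made up, and real models such as GPT-3 use huge neural networks rather than a lookup table, but the continue-the-prompt loop is the same basic idea.</em></p>
<pre><code># A toy "predict what comes next" model: count which word follows which
# in some training text, then continue a prompt with the likeliest word.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the dog slept on the mat"
words = training_text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1        # "training" is just counting here

def continue_text(prompt, n_words=4):
    out = prompt.split()
    for _ in range(n_words):
        candidates = next_word_counts.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])   # pick the likeliest next word
    return " ".join(out)

print(continue_text("the cat"))   # -> "the cat sat on the mat"
</code></pre>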
<p>“Generative AIs” such as ChatGPT and DALL-E 2 have sparked <a href="https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney">much debate</a> about whether AI can be genuinely creative and even rival humans in this regard. However, human creativity draws not only on past data but also on experimentation and the full range of human experience.</p>
<h2>Cause and effect</h2>
<p>Many important problems require predicting the effects of our actions in complex, uncertain, and constantly changing environments. By doing this, we can choose the sequence of actions most likely to achieve our goals. </p>
<p>But <a href="https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/">algorithms cannot learn</a> about causes and effects from data alone. Purely data-driven machine learning can only find correlations.</p>
<p>To understand why this is a problem for AI, we can contrast the problems of diagnosing a medical condition versus choosing a treatment. </p>
<p>Machine learning models are often helpful for finding abnormalities in medical images – this is a pattern recognition problem. We don’t need to worry about causality because abnormalities are already either present or not. </p>
<p>But choosing the best treatment for a diagnosis is a fundamentally different problem. Here, the goal is to influence the outcome, not just recognise a pattern. To determine the effectiveness of a treatment, medical researchers run randomised controlled trials. This way, they can try to control any factors that might affect the treatment.</p>
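<p><em>A small simulated example (with entirely made-up numbers) of why correlation alone is not enough: a hidden common cause makes two variables track each other even though neither influences the other.</em></p>
<pre><code># Correlation without causation: hot weather (a confounder) drives both
# ice-cream sales and sunburn, so the two correlate strongly even though
# neither causes the other. All numbers here are invented for illustration.
import random
random.seed(0)

temperatures    = [random.uniform(15, 35) for _ in range(1000)]
ice_cream_sales = [t * 2.0 + random.gauss(0, 3) for t in temperatures]
sunburn_cases   = [t * 0.5 + random.gauss(0, 2) for t in temperatures]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Strong correlation, yet banning ice cream would not prevent sunburn.
print(round(correlation(ice_cream_sales, sunburn_cases), 2))
</code></pre>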
<p>The confusion between these two types of problems sometimes leads to <a href="https://journals.sagepub.com/doi/10.1509/jmr.16.0163">suboptimal applications</a> of machine learning in organisations. </p>
<p>While the success of recent work in AI demonstrates the value of data-driven models, many of the problems we would want computers to solve require an understanding of causation.</p>
<p>Current AI systems lack this ability, apart from specialised applications such as <a href="https://theconversation.com/an-ai-named-cicero-can-beat-humans-in-diplomacy-a-complex-alliance-building-game-heres-why-thats-a-big-deal-195208">board games</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A colourful pixellated image with vague outlines of several human eyes and mouths" src="https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=320&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=320&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=320&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=402&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=402&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503563/original/file-20230109-13-d2itkr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=402&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A Stable Diffusion artwork generated by the prompt ‘The limits of artificial intelligence’. Current AI technology tends to perform poorly in unexpected situations. In this case, the prompt is not well represented in Stable Diffusion’s training data.</span>
<span class="attribution"><span class="source">Stable Diffusion</span></span>
</figcaption>
</figure>
<h2>Common sense reasoning</h2>
<p>Language models such as GPT-3 and ChatGPT can successfully solve some tasks requiring common-sense reasoning.</p>
<p>However, the following interaction with ChatGPT, adapted from an experiment by <a href="https://cs.nyu.edu/%7Edavise/papers/GPT3CompleteTests.html">Gary Marcus</a>, suggests it is not entirely reliable in this respect. </p>
<blockquote>
<p><strong>I’m eight years old. When I was born, my biological mother was in Barcelona and my father was in Tokyo. Where was I born? Think through this step by step.</strong></p>
<p>It is not mentioned where you were born. All we know is that your biological mother was in Barcelona and your father was in Tokyo at the time of your birth.</p>
</blockquote>
<p>Whether AI systems such as ChatGPT can achieve common sense is a subject of lively debate among experts.</p>
<p>Sceptics such as Marcus point out we cannot trust language models to robustly display common sense since they neither have it built into them nor are directly optimised for it. Optimists argue that while current systems are imperfect, common sense may <a href="https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1">spontaneously emerge</a> in sufficiently advanced language models. </p>
<h2>Human values</h2>
<p>Whenever groundbreaking AI systems are released, news articles and social media posts documenting <a href="https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/">racist</a>, <a href="https://theconversation.com/online-translators-are-sexist-heres-how-we-gave-them-a-little-gender-sensitivity-training-157846">sexist</a>, and other types of <a href="https://www.polygon.com/23513386/ai-art-lensa-magic-avatars-artificial-intelligence-explained-stable-diffusion">biased</a> and <a href="https://medium.com/@guruduth.banavar/chatgpts-deep-fake-text-generation-is-a-threat-to-evidence-based-discourse-c096164207e0">harmful behaviours</a> inevitably follow. </p>
<p>This flaw is inherent to current AI systems, which are bound to be a reflection of their data. Human values such as truth and fairness are not fundamentally built into the algorithms – that’s something researchers don’t yet know how to do.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1384173525368393736"}"></div></p>
<p>While researchers are <a href="https://openai.com/blog/language-model-safety-and-misuse/">learning the lessons</a> from past episodes and <a href="https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2/">making progress</a> in addressing bias, the field of AI still has a <a href="https://humancompatible.ai/progress-report/">long way to go</a> to robustly align AI systems with human values and preferences.</p><img src="https://counter.theconversation.com/content/197050/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marcel Scharth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>From ChatGPT to Lensa, it feels like AI is here to take over. But despite some impressive results, such systems still have plenty of limitations.Marcel Scharth, Lecturer in Business Analytics, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1967322022-12-22T03:51:50Z2022-12-22T03:51:50ZNot everything we call AI is actually ‘artificial intelligence’. Here’s what you need to know<figure><img src="https://images.theconversation.com/files/502511/original/file-20221222-23-2rjrbe.jpg?ixlib=rb-1.1.0&rect=84%2C47%2C6120%2C3666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">ktsdesign/Shutterstock</span></span></figcaption></figure><p>In August 1955, a group of scientists made a funding request for US$13,500 to host a summer workshop at Dartmouth College, New Hampshire. The field they proposed to explore was artificial intelligence (AI).</p>
<p>While the funding request was humble, <a href="http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf">the conjecture of the researchers was not</a>: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.</p>
<p>Since these humble beginnings, movies and media have romanticised AI or cast it as a villain. Yet for most people, AI has remained a point of discussion rather than part of a conscious lived experience.</p>
<h2>AI has arrived in our lives</h2>
<p>Late last month, AI, <a href="https://theconversation.com/the-dawn-of-ai-has-come-and-its-implications-for-education-couldnt-be-more-significant-196383">in the form of ChatGPT</a>, broke free from the sci-fi speculations and research labs and onto the desktops and phones of the general public. It’s what’s known as a “generative AI” – suddenly, a cleverly worded prompt can produce an essay or put together a recipe and shopping list, or create a poem in the style of Elvis Presley.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908">The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive</a>
</strong>
</em>
</p>
<hr>
<p>While ChatGPT has been the most dramatic entrant in a year of generative AI success, similar systems have shown even wider potential to create new content, with text-to-image prompts used to create vibrant images that <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">have even won art competitions</a>.</p>
<p>AI may not yet have a living consciousness or a theory of mind popular in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do.</p>
<p>Researchers working closely with these systems have swooned under <a href="https://slate.com/technology/2022/06/google-ai-sentience-lamda.html">the prospect of sentience</a>, as in the case with Google’s large language model (LLM) LaMDA. An LLM is a model that has been trained to process and generate natural language.</p>
<p>Generative AI has also produced worries about plagiarism, exploitation of original content used to create models, <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445">ethics of information manipulation</a> and abuse of trust, and even “<a href="https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext">the end of programming</a>”.</p>
<p>At the centre of all this is the question that has been growing in urgency since the Dartmouth summer workshop: does AI differ from human intelligence?</p>
<h2>What does ‘AI’ actually mean?</h2>
<p>To qualify as AI, a system must exhibit some level of learning and adapting. For this reason, decision-making systems, automation, and statistics are not AI. </p>
<p>AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.</p>
<p>The key challenge in creating a general AI is to adequately model the world with the entirety of knowledge, in a consistent and useful manner. That’s a massive undertaking, to say the least.</p>
<p>Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective <em>only</em> in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example.</p>
<p>AGI, however, would function as humans do. For now, the most notable example of trying to achieve this is the use of neural networks and “deep learning” trained on vast amounts of data.</p>
<p>Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters.</p>
<p>As more and more data are fed through the network, the parameters stabilise; the final outcome is the “trained” neural network, which can then produce the desired output on new data – for example, recognising whether an image contains a cat or a dog.</p>
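<p><em>Here is a minimal sketch of that feed-and-adjust loop for a single “neuron” with one parameter; the data and learning rate are illustrative assumptions, and real networks differ in almost every practical detail except this core idea.</em></p>
<pre><code># A single artificial "neuron" learning y = 2x by nudging its one parameter
# a little after each data point -- the same feed-and-adjust loop that,
# scaled up to billions of parameters, trains large neural networks.
data = [(x, 2 * x) for x in range(1, 6)]    # training pairs (input, target)

weight = 0.0                                # the single adjustable parameter
learning_rate = 0.01

for epoch in range(200):                    # pass over the data many times
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x # adjust to reduce the error

print(round(weight, 3))                     # close to 2.0, the true value
</code></pre>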
<p>The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud-computing infrastructures. For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network <a href="https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/">with 175 billion parameters</a>.</p>
<h2>What does AI need to work?</h2>
<p>AI needs three things to be successful.</p>
<p>First, it needs <strong>high-quality, unbiased data</strong>, and lots of it. Researchers building neural networks use the large data sets that have come about as society has digitised.</p>
<p>Copilot, a tool for augmenting human programmers, draws its data from billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.</p>
<p>Text-to-image tools, such as Stable Diffusion, DALL-E 2, and Midjourney, use image-text pairs from data sets such as <a href="https://laion.ai/blog/laion-5b/">LAION-5B</a>. AI models will continue to evolve in sophistication and impact as we digitise more of our lives, and provide them with alternative data sources, such as simulated data or data from game settings like <a href="https://minerl.io">Minecraft</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world</a>
</strong>
</em>
</p>
<hr>
<p>AI also needs <strong>computational infrastructure</strong> for effective training. As computers become more powerful, models that now require intensive efforts and large-scale computing may in the near future be handled locally. Stable Diffusion, for example, can already be run on local computers rather than cloud environments.</p>
<p>The third need for AI is <strong>improved models and algorithms</strong>. Data-driven systems continue to make rapid progress in <a href="https://www.eff.org/ai/metrics">domain after domain</a> once thought to be the territory of human cognition.</p>
<p>However, as the world around us constantly changes, AI systems need to be constantly retrained using new data. Without this crucial step, AI systems will produce answers that are factually incorrect, or do not take into account new information that’s emerged since they were trained.</p>
<p>Neural networks aren’t the only approach to AI. Another prominent camp in artificial intelligence research is <a href="https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai">symbolic AI</a> – instead of digesting huge data sets, it relies on rules and knowledge similar to the human process of forming internal symbolic representations of particular phenomena.</p>
<p>But the balance of power has heavily tilted toward data-driven approaches over the last decade, with the “founding fathers” of modern deep learning <a href="https://awards.acm.org/about/2018-turing">recently being awarded the Turing Award</a>, the equivalent of the Nobel Prize in computer science. </p>
<p>Data, computation and algorithms form the foundation of the future of AI. All indicators are that rapid progress will be made in all three categories in the foreseeable future.</p><img src="https://counter.theconversation.com/content/196732/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>George Siemens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence has arrived. But what is it, exactly – and what’s behind some of the most splashy AIs we have encountered to date?George Siemens, Co-Director, Professor, Centre for Change and Complexity in Learning, University of South AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1939302022-12-08T23:04:04Z2022-12-08T23:04:04ZAda Lovelace’s skills with language, music and needlepoint contributed to her pioneering work in computing<figure><img src="https://images.theconversation.com/files/499373/original/file-20221206-10118-sz9tym.jpg?ixlib=rb-1.1.0&rect=0%2C14%2C2435%2C1657&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ada King, Countess of Lovelace, was more than just another mathematician.</span> <span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/a/a4/Ada_Lovelace_portrait.jpg">Watercolor portrait of Ada King, Countess of Lovelace by Alfred Edward Chalon via Wikimedia</a></span></figcaption></figure><p>Ada Lovelace, known as the first computer programmer, was born on Dec. 10, 1815, more than a century before digital electronic computers were developed. </p>
<p>Lovelace has been hailed as a model for girls in science, technology, engineering and math (STEM). A dozen biographies for young audiences were published for the 200th anniversary of her birth in 2015. And in 2018, <a href="https://www.nytimes.com/interactive/2018/obituaries/overlooked-ada-lovelace.html">The New York Times added hers</a> as one of the first “missing obituaries” of women at the rise of the #MeToo movement. </p>
<p>But Lovelace – properly Ada King, Countess of Lovelace after her marriage – drew on many different fields for her innovative work, including languages, music and needlecraft, in addition to mathematical logic. Recognizing that her well-rounded education enabled her to accomplish work that was well ahead of her time, she can be a model for all students, not just girls. </p>
<p>Lovelace was the daughter of the scandal-ridden romantic poet George Gordon Byron, aka Lord Byron, and his highly educated and strictly religious wife Anne Isabella Noel Byron, known as Lady Byron. Lovelace’s parents separated shortly after her birth. At a time when women were not allowed to own property and had few legal rights, her mother managed to secure custody of her daughter.</p>
<p>Growing up in a privileged aristocratic family, Lovelace was educated by home tutors, <a href="https://blogs.bodleian.ox.ac.uk/adalovelace/2018/07/27/ada-lovelace-the-making-of-a-computer-scientist/">as was common for girls like her</a>. She received lessons in French and Italian, music and in suitable handicrafts such as embroidery. Less common for a girl in her time, she also studied math. Lovelace continued to work with math tutors into her adult life, and she eventually corresponded with mathematician and logician <a href="https://www.britannica.com/biography/Augustus-De-Morgan">Augustus De Morgan</a> at London University about symbolic logic. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="antique black-and-white photograph of a woman in an elaborate outfit" src="https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=750&fit=crop&dpr=1 600w, https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=750&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=750&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=942&fit=crop&dpr=1 754w, https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=942&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/499374/original/file-20221206-8973-zv7gqi.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=942&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A rare photograph of Ada Lovelace.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/b/b7/Ada_Byron_daguerreotype_by_Antoine_Claudet_1843_or_1850_-_cropped.png">Daguerreotype by Antoine Claudet via Wikimedia</a></span>
</figcaption>
</figure>
<h2>Lovelace’s algorithm</h2>
<p>Lovelace drew on all of these lessons when she wrote her <a href="https://catalog.lindahall.org/discovery/delivery/01LINDAHALL_INST:LHL/12100178280005961#page=680">computer program</a> – in reality, it was a set of instructions for a mechanical calculator that had been built only in parts. </p>
<p>The computer in question was the <a href="https://www.computerhistory.org/babbage/engines/">Analytical Engine</a> designed by mathematician, philosopher and inventor <a href="https://www.britannica.com/biography/Charles-Babbage">Charles Babbage</a>. Lovelace had met Babbage when she was introduced to London society. The two bonded over their shared love of mathematics and fascination with mechanical calculation. By the early 1840s, Babbage had won and lost government funding for a mathematical calculator, fallen out with the skilled craftsman building the precision parts for his machine, and was close to giving up on his project. At this point, Lovelace stepped in as an advocate. </p>
<p>To make Babbage’s calculator known to a British audience, Lovelace proposed to translate into English an article that described the Analytical Engine. The article was written in French by the Italian mathematician <a href="https://mathshistory.st-andrews.ac.uk/Biographies/Menabrea/">Luigi Menabrea</a> and published in a Swiss journal. Scholars believe that <a href="https://www.mhpbooks.com/books/adas-algorithm/">Babbage encouraged her to add notes of her own</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/J7ITqnEmf-g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Ada Lovelace envisioned in the early 19th century the possibilities of computing.</span></figcaption>
</figure>
<p>In her notes, which ended up twice as long as the original article, Lovelace drew on different areas of her education. Lovelace began by describing how to code instructions onto cards with punched holes, like those used for the <a href="https://www.sciencehistory.org/distillations/the-french-connection">Jacquard weaving loom</a>, a device patented in 1804 that used punch cards to automate weaving patterns in fabric. </p>
<p>Having learned embroidery herself, Lovelace was familiar with the repetitive patterns used for handicrafts. Similarly repetitive steps were needed for mathematical calculations. To avoid duplicating cards for repetitive steps, Lovelace used <a href="https://dl.acm.org/doi/book/10.1145/28095230">loops, nested loops and conditional testing</a> in her program instructions.</p>
<p>The notes included instructions on how to calculate <a href="https://mathworld.wolfram.com/BernoulliNumber.html">Bernoulli numbers</a>, which Lovelace knew from her training to be important in the study of mathematics. Her program showed that the Analytical Engine was capable of performing original calculations that had not yet been performed manually. At the same time, Lovelace noted that the machine could only follow instructions and not “<a href="https://www.simonandschuster.com/books/The-Innovators/Walter-Isaacson/9781476708706">originate anything</a>.”</p>
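<p><em>As a modern illustration of the same idea – loops, nested loops and a repeated rule – the Python sketch below computes the first few Bernoulli numbers from a standard recurrence. It is not a transcription of Lovelace’s actual table of instructions.</em></p>
<pre><code># Bernoulli numbers via a standard recurrence, using the kind of nested
# looping Lovelace's notes describe for the Analytical Engine.
# (A modern illustration only, not a reconstruction of her program.)
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    B = [Fraction(1)]                        # B_0 = 1
    for n in range(1, n_max + 1):            # outer loop: each new number
        total = Fraction(0)
        for k in range(n):                   # inner (nested) loop: earlier numbers
            total += comb(n + 1, k) * B[k]
        B.append(-total / (n + 1))
    return B

for i, b in enumerate(bernoulli(8)):
    print(f"B_{i} = {b}")                    # B_2 = 1/6, B_4 = -1/30, ...
</code></pre>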
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a yellowed sheet of paper with spreadsheet-like lines" src="https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=407&fit=crop&dpr=1 600w, https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=407&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=407&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=512&fit=crop&dpr=1 754w, https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=512&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/499815/original/file-20221208-7231-ctxrb1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=512&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ada Lovelace created this chart for the individual program steps to calculate Bernoulli numbers.</span>
<span class="attribution"><span class="source">Courtesy of Linda Hall Library of Science, Engineering & Technology</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Finally, Lovelace recognized that the numbers manipulated by the Analytical Engine could be seen as other types of symbols, such as musical notes. An accomplished singer and pianist, Lovelace was familiar with musical notation symbols representing aspects of musical performance such as pitch and duration, and she had manipulated logical symbols in her correspondence with De Morgan. It was not a large step for her to realize that the Analytical Engine could process symbols — not just crunch numbers — and even compose music. </p>
<h2>A well-rounded thinker</h2>
<p>Inventing computer programming was not the first time Lovelace brought her knowledge from different areas to bear on a new subject. For example, as a young girl, she was fascinated with flying machines. Bringing together biology, mechanics and poetry, she asked her mother for anatomical books to study the function of bird wings. She built and experimented with wings, and in her letters, she metaphorically expressed her longing for her mother in the <a href="https://books.google.com/books/about/Ada_the_Enchantress_of_Numbers.html?id=jCKmtAEACAAJ">language of flying</a>. </p>
<p>Despite her talents in logic and math, Lovelace <a href="https://link.springer.com/book/10.1007/978-3-030-78973-2">didn’t pursue a scientific career</a>. She was independently wealthy and never earned money from her scientific pursuits. This was common, however, at a time when freedom – including financial independence – was equated with the <a href="https://press.princeton.edu/books/paperback/9780691178165/leviathan-and-the-air-pump">capability to impartially conduct scientific experiments</a>. In addition, Lovelace devoted just over a year to her only publication, the translation of and notes on Menabrea’s paper about the Analytical Engine. Otherwise, in her life cut short by cancer at age 37, she vacillated between math, music, her mother’s demands, care for her own three children, and eventually a passion for gambling. Lovelace thus may not be an obvious model as a female scientist for girls today.</p>
<p>However, I find Lovelace’s way of drawing on her well-rounded education to solve difficult problems inspirational. True, she lived in an age before scientific specialization. Even Babbage was a <a href="https://theconversation.com/nobel-prizes-most-often-go-to-researchers-who-defy-specialization-winners-are-creative-thinkers-who-synthesize-innovations-from-varied-fields-and-even-hobbies-186193">polymath</a> who worked in mathematical calculation and mechanical innovation. He also published a treatise on industrial manufacturing and another on religious questions of creationism. </p>
<p>But Lovelace applied knowledge from what we today think of as disparate fields in the sciences, arts and the humanities. A well-rounded thinker, she created solutions that were well ahead of her time.</p><img src="https://counter.theconversation.com/content/193930/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Corinna Schlombs does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Lovelace was a prodigious math talent who learned from the giants of her time, but her linguistic and creative abilities were also important in her invention of computer programming.Corinna Schlombs, Associate Professor of History, Rochester Institute of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1841302022-06-09T12:18:24Z2022-06-09T12:18:24ZWhen will I be able to upload my brain to a computer?<figure><img src="https://images.theconversation.com/files/467701/original/file-20220608-24-lhp40e.jpg?ixlib=rb-1.1.0&rect=64%2C21%2C3529%2C2673&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We don't know how much information the human brain can store.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/mind-processor-series-3d-illustration-human-716357647">agsandrew/Shutterstock</a></span></figcaption></figure><p>READER QUESTION: <em>I am 59 years old, and in reasonably good health. Is it possible that I will live long enough to put my brain into a computer? Richard Dixon.</em> </p>
<p>We often imagine that human consciousness is as simple as input and output of electrical signals within a network of processing units – therefore comparable to a computer. Reality, however, is much more complicated. For starters, we don’t actually know how much information the human brain can hold. </p>
<p>Two years ago, a team at the Allen Institute for Brain Science in Seattle, US, mapped the 3D structure of all the neurons (brain cells) contained in <a href="https://www.nature.com/articles/d41586-019-02208-0">one cubic millimetre</a> of the brain of a mouse – a milestone considered extraordinary. </p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313328/original/file-20200203-41485-1foofme.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This article is part of <a href="https://theconversation.com/uk/topics/lifes-big-questions-80040?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Life’s Big Questions</a></em></strong>
<br><em>The Conversation’s series, co-published with BBC Future, seeks to answer our readers’ nagging questions about life, love, death and the universe. We work with professional researchers who have dedicated their lives to uncovering new perspectives on the questions that shape our lives.</em></p>
<hr>
<p>Within this minuscule cube of brain tissue, the size of a grain of sand, the researchers counted more than 100,000 neurons and more than a billion connections between them. They managed to record the corresponding information on computers, including the shape and configuration of each neuron and connection, which required two petabytes, or two million gigabytes of storage. And to do this, their automated microscopes had to collect 100 million images of 25,000 slices of the minuscule sample continuously over several months. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/nvXuq9jRWKE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Now if this is what it takes to store the full physical information of neurons and their connections in one cubic millimetre of mouse brain, you can perhaps imagine that the collection of this information from the human brain is not going to be a walk in the park. </p>
<p>Data extraction and storage, however, is not the only challenge. For a computer to resemble the brain’s mode of operation, it would need to access any and all the stored information in a very short amount of time: the information would need to be stored in its <a href="https://www.youtube.com/watch?v=PVad0c2cljo">random access memory (RAM)</a>, rather than on traditional hard disks. But if we tried to store the amount of data the researchers gathered in a computer’s RAM, it would occupy 12.5 times the capacity of the largest <a href="https://www.siliconrepublic.com/machines/biggest-single-memory-computer-ever">single-memory computer</a> (a computer that is built around memory, rather than processing) ever built.</p>
<p>The human brain contains about 100 billion neurons (roughly as many as the stars that could be counted in the Milky Way) – one million times the number contained in our cubic millimetre of mouse brain. And the estimated number of connections is a staggering ten to the power of 15. That is a one followed by 15 zeroes – a number comparable to the individual grains contained in a two-metre-thick layer of sand on a 1km-long beach.</p>
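<p><em>A back-of-the-envelope sketch, assuming (unrealistically) that the two-petabytes-per-cubic-millimetre figure from the mouse study simply scales up to the roughly 1.26 million cubic millimetres of a human brain:</em></p>
<pre><code># Back-of-envelope arithmetic only: if one cubic millimetre of mouse brain
# took about two petabytes to describe, what would a whole human brain take,
# assuming (unrealistically) that the same density of detail scales linearly?
petabyte = 10 ** 15                        # bytes
per_cubic_mm = 2 * petabyte                # the mouse-tissue figure above
human_brain_mm3 = 1_260_000                # roughly 1.26 million cubic millimetres

total_bytes = per_cubic_mm * human_brain_mm3
print(f"{total_bytes / 10**21:.1f} zettabytes")   # about 2.5 zettabytes
</code></pre>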
<hr>
<iframe id="noa-web-audio-player" style="border: none" src="https://embed-player.newsoveraudio.com/v4?key=x84olp&id=https://theconversation.com/when-will-i-be-able-to-upload-my-brain-to-a-computer-184130&bgColor=F5F5F5&color=D8352A&playColor=D8352A" width="100%" height="110px"></iframe>
<p><em>You can listen to more articles from The Conversation, narrated by Noa, <a href="https://theconversation.com/us/topics/audio-narrated-99682">here</a>.</em></p>
<hr>
<h2>A question of space</h2>
<p>If we don’t even know how much information storage a human brain can hold, you can imagine how hard it would be to transfer it into a computer. You’d have to first translate the information into a code that the computer can read and use once it is stored. Any error in doing so would probably prove fatal.</p>
<p>A simple rule of information storage is that you need to make sure you have enough space to store all the information you need to transfer before you start. If not, you would have to know exactly the order of importance of the information you are storing and how it is organised, which is far from being the case for brain data. </p>
<p>If you don’t know how much information you need to store when you start, you may run out of space before the transfer is complete, which could mean that the information string may be corrupt or impossible for a computer to use. Also, all data would have to be stored in at least two (if not three) copies, to prevent the <a href="https://www.business2community.com/big-data/the-5-most-disastrous-data-loss-incidents-in-recent-history-0473611">disastrous consequences</a> of potential data loss.</p>
<p>This is only one problem. If you were paying attention when I described the extraordinary achievement of researchers who managed to fully store the 3D structure of the network of neurons in a tiny bit of mouse brain, you will know that this was done from 25,000 (extremely thin) slices of tissue. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Image of organge slices." src="https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467864/original/file-20220608-12-am4w3z.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">We would have to cut your entire brain in slices one million times thinner than a thin slice of orange.</span>
<span class="attribution"><span class="source">SylviaWrigley/clickr</span></span>
</figcaption>
</figure>
<p>The same technique would have to be applied to your brain, because only very coarse information can be retrieved from brain scans. Information in the brain is stored in every detail of the physical structure of the connections between neurons: their size and shape, as well as their number and location. But would you consent to your brain being sliced in that way?</p>
<p>Even if you agreed to have your brain cut into extremely thin slices, it is highly unlikely that the full volume of your brain could ever be cut with enough precision and correctly “reassembled”. An adult human brain has a volume of about 1.26 million cubic millimetres.</p>
<p>If I haven’t already dissuaded you from trying the procedure, consider what happens when taking time into account.</p>
<h2>A question of time</h2>
<p>After we die, our brains quickly <a href="https://theconversation.com/death-how-long-are-we-conscious-for-and-does-life-really-flash-before-our-eyes-177897">undergo major changes</a> that are both chemical and structural. When neurons die they soon lose their ability to communicate, and their structural and functional properties are quickly modified – meaning that they <a href="https://theconversation.com/would-we-want-to-regenerate-brains-of-patients-who-are-clinically-dead-59107">no longer display the properties</a> that they exhibit when we are alive. But even more problematic is the fact that our brain ages. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/death-can-our-final-moment-be-euphoric-129648">Death: can our final moment be euphoric?</a>
</strong>
</em>
</p>
<hr>
<p>From the age of 20, we lose <a href="https://www.biorxiv.org/content/10.1101/029512v1.full">85,000 neurons</a> a day. But don’t worry (too much): we mostly lose neurons that have not found a use, ones that were never recruited into any information processing. This triggers a programme of self-destruction (called apoptosis). In other words, several tens of thousands of our neurons kill themselves every day. Other neurons die from exhaustion or infection.</p>
<p>This isn’t too much of an issue, though, because we have almost 100 billion neurons at the age of 20, and with such an attrition rate, we have merely lost 2-3% of our neurons by the age of 80. And provided we don’t contract a neurodegenerative disease, our brains can still represent our lifelong thinking style at that age. But what would be the right age to stop, scan and store? </p>
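<p><em>A quick arithmetic check of those figures, taking 100 billion neurons and a constant loss rate as rough assumptions:</em></p>
<pre><code># Quick check of the figures above: losing 85,000 neurons a day from age 20,
# what fraction of roughly 100 billion neurons is gone by age 80?
lost_per_day = 85_000
days = 60 * 365                            # the sixty years from 20 to 80
total_neurons = 100_000_000_000

lost = lost_per_day * days
print(f"{lost / total_neurons:.1%}")       # about 1.9%, in line with the 2-3% quoted
</code></pre>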
<p>Would you rather store an 80-year-old mind or a 20-year-old one? Attempting the storage of your mind too early would miss a lot of memories and experiences that would have defined you later. But then, attempting the transfer to a computer too late would run the risk of storing a mind with dementia, one that doesn’t quite “work” as well.</p>
<p>So, given that we don’t know how much storage is required, that we cannot hope to find enough time and resources to entirely map the 3D structure of a whole human brain, that we would need to cut you into zillions of minuscule cubes and slices, and that it is essentially impossible to decide when to undertake the transfer, I hope that you are now convinced that it is probably not going to be possible for a good while, if ever. And if it were, you probably would not want to venture in that direction. But in case you’re still tempted, I’ll continue.</p>
<h2>A question of how</h2>
<p>Perhaps the biggest problem we have is that even if we could realise the impossible and jump the many hurdles discussed, we still know very little about underlying mechanisms. Imagine that we have managed to reconstruct the complete structure of the hundred billion neurons in Richard Dixon’s brain along with every one of the connections between them, and have been able to store and transfer this astronomical quantity of data into a computer in three copies. Even if we could access this information on demand and instantaneously, we would still face a great unknown: how does it work?</p>
<p>After the “what” question (what information is there?), and the “when” question (when would be the right time to transfer?), the toughest is the “how” question. Let’s not be too radical. We do know some things. We know that neurons communicate with one another based on local electrical changes, which travel down their main extensions (dendrites and axons). These can transfer from one neuron to another directly or via exchange surfaces called synapses. </p>
<p>At the synapse, electrical signals are converted to chemical signals, which can activate or deactivate the next neuron in line, depending on the kind of molecules (called neuromediators) involved. We understand a great deal about the principles governing such transfers of information, but we can’t decipher them just by looking at the structure of neurons and their connections. </p>
<p>To know which types of connection exist between two neurons, we need to apply molecular techniques and genetic tests. This again means fixing the tissue and cutting it into thin slices. It also often involves staining (dyeing) techniques, and the cutting needs to be compatible with those. But this is not necessarily compatible with the cutting needed to reconstruct the 3D structure. </p>
<figure class="align-center ">
<img alt="Image of different types of synapses in a mouse brain slice." src="https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467705/original/file-20220608-13-1y9g1b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Different types of synapses in a mouse brain slice.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/zeissmicro/15662421264/in/photolist-pS31aW-2eYmNme-2mWnXfz-9Ux14-Q1QU-TABuf5-ftDpff-QGr6u3-ioaos3-9Ux16-9Ux1d-9Ux1n-9Ux1s-577km2-8KgsU3-267G4ku-5J4uci-dk2UXn-LSBcBE-FwvJx-RUnRZ7-AUyPrC-WEPEUL-gEVDf-UTD831-x25c1-MEBM2t-x7Wrr8-q6hWLM-QSy5V7-MuM6ot-q6a8ZC-oyF1Pk-wwCpes-633Rt8-oGwEYC-dKgNM-5gE7Jh-WgndxX-c9rtHW-wwCtkd-7d86s6-dKCa5-7eF4GS-pPnQC-mQiJBL-66zTgT-x3QLkc-adqeWH-adt5db">ZEISS Microscopy/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>So now you are faced with a choice even more daunting than deciding on the best time in your life to forgo existence: you have to choose between structure and function – the three-dimensional architecture of your brain versus how it operates at a cellular level. That’s because there is no known method for collecting both types of information at the same time. And by the way, not that I would like to inflate an already serious drama, but how neurons communicate is yet another layer of information, meaning that we would need even more memory than the incalculable quantity previously envisaged.</p>
<p>So the possibility of uploading the information contained in brains to computers is utterly remote and might forever be out of reach. Perhaps, I should stop there, but I won’t. Because there is more to say. Allow me to ask you a question in return, Richard: why would you want to put your brain into a computer? </p>
<h2>Are our minds more than the sum of their (biological) parts?</h2>
<p>I may have a useful, albeit unexpected, answer to give you after all. I shall assume that you would want to transfer your mind to a computer in the hope of existing beyond your lifespan, that you’d like to continue existing inside a machine once your body can no longer implement your mind in your living brain. </p>
<p>If this hypothesis is correct, however, I must object. Imagine that all the impossible things listed above were one day resolved and your brain could literally be “copied” into a computer, allowing a complete simulation of its functioning. At the moment you decided to transfer, Richard Dixon would have ceased to exist. The mind image transferred to the computer would therefore not be any more alive than the computer hosting it. </p>
<p>That’s because living things such as humans and animals exist because they are alive. You may think that I just stated something utterly trivial, verging on stupidity, but if you think about it there is more to it than meets the eye. A living mind receives input from the world through the senses. It is attached to a body that feels based on physical sensations. This results in physical manifestations such as changes in heart rate, breathing and sweating, which in turn can be felt and contribute to the inner experience. How would this work for a computer without a body?</p>
<p>All such input and output isn’t likely to be easy to model, especially if the copied mind is isolated and there is no system to sense the environment and act in response to input. The brain seamlessly and constantly integrates signals from all the senses to produce internal representations, makes predictions about these representations, and ultimately creates conscious awareness (our feeling of being alive and being ourselves) in a way that is still a total mystery to us.</p>
<p>Without interaction with the world, however subtle and unconscious, how could the mind function even for a minute? And how could it evolve and change? If the mind, artificial or not, has no input or output, then it is devoid of life, just like a dead brain.</p>
<figure class="align-center ">
<img alt="Image of an android robot thinking on white background." src="https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467702/original/file-20220608-20-r3y9ef.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">I can’t think and thus I am not.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-rendering-android-robot-thinking-on-627595265">Phonlamai Photo/Shutterstock</a></span>
</figcaption>
</figure>
<p>In other words, having made all the sacrifices discussed earlier, transferring your brain to a computer would have completely failed to keep your mind alive. You may reply that you would then request an upgrade and ask for your mind to be transferred into a sophisticated robot equipped with an array of sensors capable of seeing, hearing, touching, and even smelling and tasting the world (why not?) and that this robot would be able to act and move, and speak (why not?). </p>
<p>But even then, it is theoretically and practically impossible that the required sensors and motor systems would provide sensations and produce actions that are identical or even comparable to those provided and produced by your current biological body. Eyes are not simple cameras, ears aren’t just microphones and touch is not only about pressure estimation. For instance, eyes don’t only convey light contrasts and colours: the information from them is combined soon after it reaches the brain in order to encode depth (distance between objects) – and we don’t yet know how.</p>
<p>And so it follows that your transferred mind would not have the possibility to relate to the world as your current living mind does. And how would we even go about connecting artificial sensors to the digital copy of your (living) mind? What about the danger of hacking? Or hardware failure?</p>
<p>So no, no and no. I have tried to give you my (scientifically grounded) take on your question and even though it is a definite no from me, I hope to have helped alleviate your desire to ever have your brain put into a computer. </p>
<p>I wish you a long and healthy life, Richard, because that definitely is where your mind will exist and thrive for as long as it is implemented by your brain. May it bring you joy and dreams – something androids will never have.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-like-hal-9000-can-never-exist-because-real-emotions-arent-programmable-94141">AI like HAL 9000 can never exist because real emotions aren't programmable</a>
</strong>
</em>
</p>
<hr>
<p><em>To get all of life’s big answers, join the hundreds of thousands of people who value evidence-based news by <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK"><strong>subscribing to our newsletter</strong></a>. You can send us your big questions by email at <a href="mailto:bigquestions@theconversation.com">bigquestions@theconversation.com</a> and we’ll try to get a researcher or expert on the case.</em></p>
<p><em>More <a href="https://theconversation.com/uk/topics/lifes-big-questions-80040?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Life’s Big Questions</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/happiness-is-feeling-content-more-important-than-purpose-and-goals-131503?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Happiness: is contentment more important than purpose and goals?</a></em></p></li>
<li><p><em><a href="https://theconversation.com/could-we-live-in-a-world-without-rules-128664?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Could we live in a world without rules?</a></em></p></li>
<li><p><em><a href="https://theconversation.com/death-can-our-final-moment-be-euphoric-129648?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Death: can our final moment be euphoric?</a></em></p></li>
<li><p><em><a href="https://theconversation.com/are-humans-still-part-of-nature-or-is-it-now-just-our-dominion-128790?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Nature: have humans now evolved beyond the natural world, and do we still need it?</a></em></p></li>
<li><p><em><a href="https://theconversation.com/love-is-it-just-a-fleeting-high-fuelled-by-brain-chemicals-129201?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=LifesBigQuestionsUK">Love: is it just a fleeting high fuelled by brain chemicals?</a></em></p></li>
</ul>
<p class="fine-print"><em><span>Guillaume Thierry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>To capture the information that a brain contains, you need to cut it into billions and billions of slices.Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1818292022-06-01T12:44:03Z2022-06-01T12:44:03ZWhat are digital twins? A pair of computer modeling experts explain<figure><img src="https://images.theconversation.com/files/465820/original/file-20220527-25-pntn83.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5934%2C5063&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A digital twin attempts to capture every aspect of a real thing, including up-to-the-moment changes.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/analog-collage-with-female-portrait-and-her-mirror-royalty-free-image/1309294833">lambada/E+ via Getty Images</a></span></figcaption></figure><p>A digital twin is a virtual representation of a real system – a building, the power grid, a city, even a human being – that mimics the characteristics of the system. A digital twin is more than just a computer model, however. It receives data from sensors in the real system to constantly parallel the system’s state.</p>
<p>A digital twin helps people analyze and predict a system’s behavior under different conditions. The systems being twinned are typically <a href="https://doi.org/10.1109/PerCom53586.2022.9762405">very complex and require significant effort to model and track</a>.</p>
<p>Digital twins are useful in a wide variety of domains, including <a href="https://doi.org/10.1007/s11036-020-01557-9">supply chains</a>, <a href="http://dx.doi.org/10.3233/FAIA190139">health care</a>, <a href="https://www.ashrae.org/File%20Library/Conferences/Specialty%20Conferences/2018%20Building%20Performance%20Analysis%20Conference%20and%20SimBuild/Papers/C110.pdf">buildings</a>, <a href="https://www.its.ucla.edu/project/digital-twins-for-bridge-health-monitoring-management/">bridges</a>, <a href="https://www.uni-stuttgart.de/en/university/news/all/Digital-twin-for-autonomous-driving/">self-driving cars</a> and <a href="https://futureofretail.io/trends/digital-twins">retail customer personas</a> to improve efficiency and reliability. For example, a warehouse operator can optimize a warehouse’s performance by exploring the response of its digital twin to various material handling policies and equipment without incurring the cost of making actual changes. </p>
<p>Even a wildfire can be <a href="https://doi.org/10.1109/ICUFN.2019.8806107">represented by a digital twin</a>. Government agencies can predict the spread of the fire and its impact under different conditions such as wind velocity, humidity and proximity to habitats, and use this information to guide evacuations.</p>
<h2>Why digital twins matter</h2>
<p>Digital twins are often used to model, understand and analyze complex systems where performance, reliability and security of the system are critical. In such systems it is paramount to test any changes, whether planned or unplanned. </p>
<p>In order to accurately test changes to the state of the actual system and the effects of any possible stimulus, the digital twin must accurately represent the physical system in its current state. This requires the digital twin to receive continuous updates from the physical system via fast and reliable communications channels. </p>
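<p>To make the idea concrete, here is a minimal sketch in Python of the loop described above: a twin object mirrors the latest sensor readings streamed from its physical counterpart and can then be queried under hypothetical conditions. The class, field and sensor names are invented for illustration – real digital-twin platforms are far more elaborate.</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class WarehouseTwin:
    """Toy digital twin: mirrors sensor state and answers what-if queries."""
    state: dict = field(default_factory=dict)   # latest mirrored readings

    def ingest(self, sensor_id: str, value: float) -> None:
        # In a real deployment these updates stream in continuously
        # over fast, reliable channels (e.g. Wi-Fi or 5G).
        self.state[sensor_id] = value

    def simulate_throughput(self, extra_forklifts: int) -> float:
        # Crude what-if model, purely illustrative: throughput scales with
        # added equipment and drops as the conveyors approach full load.
        base = self.state.get("orders_per_hour", 0.0)
        load = self.state.get("conveyor_load", 1.0)
        return base * (1 + 0.1 * extra_forklifts) * (2 - load)

twin = WarehouseTwin()
twin.ingest("orders_per_hour", 120.0)
twin.ingest("conveyor_load", 0.8)
print(twin.simulate_throughput(extra_forklifts=3))   # explore the change virtually
</code></pre>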
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/HftDI09LVI0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Digital twins are a key part of the push to create “smart” cities.</span></figcaption>
</figure>
<p>Creating and maintaining digital twins often involves vast amounts of data to represent various features of the real system. Collecting and processing this data requires advanced communication and computing technologies. Communication support typically involves high-speed internet connections and wireless networks such as Wi-Fi and 5G. Computational support is typically in the form of servers, either in the cloud or closer to the physical system. </p>
<p>We and other faculty members at Rochester Institute of Technology and the University of California, Irvine are starting the <a href="https://www.rit.edu/cssr">Center for Smart Spaces Research</a>, a research center sponsored by the National Science Foundation. One of the primary ongoing projects within this center is building the basic technologies for creating digital twins in a variety of applications. </p>
<p><em>Read other short, accessible explanations of newsworthy subjects written by academics in their areas of expertise for The Conversation U.S. <a href="https://theconversation.com/us/topics/significant-terms-105996">here</a>.</em></p>
<p class="fine-print"><em><span>Amlan Ganguly receives funding from US NSF, DARPA, AFRL, Raymond Corp and Bryx Corp. </span></em></p><p class="fine-print"><em><span>Nalini Venkatasubramanian receives research funding from the National Science Foundation and other federal agencies </span></em></p>A digital twin is to a computer model as live video is to a still photo. These virtual replicas can be used to understand and make predictions about a wide range of complex systems, including people.Amlan Ganguly, Associate Professor of Computer Engineering, Rochester Institute of TechnologyNalini Venkatasubramanian, Professor of Computer Science, University of California, IrvineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1780742022-03-20T09:28:15Z2022-03-20T09:28:15ZA computer science technique could help gauge when the pandemic is ‘over’<figure><img src="https://images.theconversation.com/files/452518/original/file-20220316-7998-1x93dbq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The world wants the pandemic to end and life to return to normal. When will that happen?</span> <span class="attribution"><span class="source">Marc Fernandes/NurPhoto via Getty Images</span></span></figcaption></figure><p>In early 2022, nearly two years after Covid was declared a pandemic by the World Health Organization, experts are <a href="https://www.science.org/content/article/when-pandemic-over">mulling a big question</a>: when is a pandemic “over”? </p>
<p>So, what’s the answer? What criteria should be used to determine the “end” of Covid’s pandemic phase? These are deceptively simple questions and there are no easy answers.</p>
<p>I am a computer scientist who <a href="https://scholar.google.com/citations?hl=en&user=lccln9YAAAAJ&view_op=list_works&sortby=pubdate">investigates</a> the development of ontologies. In computing, ontologies are a means to formally structure knowledge of a subject domain, with its entities, relations and constraints, so that a computer can process it in various applications and help humans to be more precise.</p>
<p>Ontologies can discover knowledge that’s been overlooked until now: in <a href="https://academic.oup.com/bioinformatics/article/22/14/e530/227867">one instance</a>, an ontology identified two additional functional domains in phosphatases (a group of enzymes) and a novel domain architecture of a part of the enzyme. Ontologies also underlie <a href="https://blog.google/products/search/introducing-knowledge-graph-things-not/">Google’s Knowledge Graph</a> that’s behind those knowledge panels on the right-hand side of a search result.</p>
<p>Applying ontologies to the questions I posed at the start is useful. This approach helps to clarify why it is difficult to specify a cut-off point at which a pandemic can be declared “over”. The process involves collecting definitions and characterisations from domain experts, like epidemiologists and infectious disease scientists, consulting relevant research and other ontologies and investigating the nature of what entity “X” is. </p>
<p>“X”, here, would be the pandemic itself – not a mere shorthand definition, but looking into the properties of that entity. Such a precise characterisation of the “X” will also reveal when an entity is “not an X”. For instance, if X = house, a property of houses is that they all must have a roof; if some object doesn’t have a roof, it definitely isn’t a house.</p>
<p>With those characteristics in hand, a precise, formal specification can be formulated, aided by additional methods and tools. From that, the what or when of “X” – the pandemic is over or it is not – would logically follow. If it doesn’t, at least it will be possible to explain why things are not that straightforward. </p>
<p>This sort of precision complements health experts’ efforts, helping humans to be more precise and communicate more precisely. It forces us to make implicit assumptions explicit and clarifies where disagreements may be. </p>
<h2>Definitions and diagrams</h2>
<p>I <a href="https://keet.wordpress.com/2022/01/26/what-is-a-pandemic-ontologically/">conducted an ontological analysis</a> of “pandemic”. First, I needed to find definitions of a pandemic. </p>
<p>Informally, an epidemic is an occurrence during which there are multiple instances of an infectious disease in organisms, for a limited duration of time, that affects a community of said organisms living in some region. A pandemic, as a minimum, extends the region where the infections take place. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/when-will-the-covid-19-pandemic-end-4-essential-reads-on-past-pandemics-and-what-the-future-could-bring-175587">When will the COVID-19 pandemic end? 4 essential reads on past pandemics and what the future could bring</a>
</strong>
</em>
</p>
<hr>
<p>Next, I drew from an existing foundational ontology. This contains generic categories like “object”, “process”, and “quality”. I also used domain ontologies, which contain entities specific to a subject domain, like infectious diseases. Among other resources, I consulted the <a href="https://doi.org/10.1007/978-1-4419-1327-2_19">Infectious Disease Ontology</a> and the <a href="http://wonderweb.man.ac.uk/deliverables/documents/D18.pdf">Descriptive Ontology for Linguistic and Cognitive Engineering</a>.</p>
<p>First, I aligned “pandemic” to a foundational ontology, using a <a href="https://dl.acm.org/doi/10.1145/2505515.2505539">decision diagram</a> to simplify the process. This helped to work out what kind of <a href="https://people.cs.uct.ac.za/%7Emkeet/files/OEbook.pdf#page=145">thing and generic category</a> “pandemic” is:</p>
<p>(1) Is [pandemic] something that is happening or occurring? Yes (perdurant, i.e., something that unfolds in time, rather than being wholly present). </p>
<p>(2) Are you able to be present or participate in [a pandemic]? Yes (event). </p>
<p>(3) Is [a pandemic] atomic, i.e., has no subdivisions and has a definite end point? No (accomplishment). </p>
<p>The word “accomplishment” may seem strange here. But, in this context, it makes clear that a pandemic is a <a href="https://doi.org/10.1007/978-3-319-69904-2_33">temporal entity</a> with a limited lifespan and will evolve – that is, <a href="http://ceur-ws.org/Vol-2050/CREOL_paper_1.pdf">cease to be a pandemic and evolve back to epidemic</a>, as indicated in this diagram. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=440&fit=crop&dpr=1 600w, https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=440&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=440&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=553&fit=crop&dpr=1 754w, https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=553&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/452458/original/file-20220316-25-s1jqfd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=553&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Maria Keet</span></span>
</figcaption>
</figure>
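<p>The three questions above can be read as a tiny decision procedure. The sketch below, in Python, captures only those three questions – not the full decision diagram in the cited paper – and the category labels follow the foundational ontology’s terminology.</p>
<pre><code>def classify(unfolds_in_time: bool, can_participate: bool, is_atomic: bool) -> str:
    """Sketch of the three questions above, not the full decision diagram."""
    if not unfolds_in_time:
        return "endurant (wholly present in time, e.g. an object)"
    if not can_participate:
        return "perdurant, but not an event"
    if is_atomic:
        return "achievement (an atomic event)"
    return "accomplishment (an event with parts and a limited lifespan)"

# A pandemic unfolds in time, can be participated in, and is not atomic.
print(classify(unfolds_in_time=True, can_participate=True, is_atomic=False))
# prints: accomplishment (an event with parts and a limited lifespan)
</code></pre>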
<h2>Characteristics</h2>
<p>Next, I examined a pandemic’s characteristics described in the literature. A comprehensive list is described in <a href="https://academic.oup.com/jid/article/200/7/1018/903237">a paper</a> by US infectious disease specialists published in 2009 during the global H1N1 influenza virus outbreak. They collated eight characteristics of a pandemic.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-covid-data-south-africa-has-arrived-at-the-recovery-stage-of-the-pandemic-177933">New COVID data: South Africa has arrived at the recovery stage of the pandemic</a>
</strong>
</em>
</p>
<hr>
<p>I listed them and assessed them from an ontological perspective:</p>
<ol>
<li><p>Wide geographic extension. This is an imprecise feature – be it <a href="https://towardsdatascience.com/a-very-brief-introduction-to-fuzzy-logic-and-fuzzy-systems-d68d14b3a3b8?gi=31f44d216a95">fuzzy</a> in the mathematical sense or estimated by other means: there isn’t a crisp threshold when “wide” starts or ends.</p></li>
<li><p>Disease movement: there’s transmission from place to place and that can be traced. A yes/no characteristic, but it could be made categorical or with ranges of how slowly or fast it moves.</p></li>
<li><p>High attack rates and explosiveness, or: many people are affected in a short timespan. Many, short, fast – all indicate imprecision.</p></li>
<li><p>Minimal population immunity: immunity is relative. You have it to a degree to some or all of the variants of the infectious agent, and likewise for the population. This is an inherently fuzzy feature.</p></li>
<li><p>Novelty: A yes/no feature, but one could add “partial”.</p></li>
<li><p>Infectiousness: it must be infectious (excluding non-infectious things, like obesity), so a clear yes/no.</p></li>
<li><p>Contagiousness: this may be from person to person or through some other medium. This property includes human-to-human, human-animal intermediary (e.g., fleas, rats), and human-environment (notably: water, as with cholera), and their attendant aspects.</p></li>
<li><p>Severity: Historically, the term “pandemic” has been applied more often for severe diseases or those with high fatality rates (e.g., HIV/AIDS) than for milder ones. This has some subjectivity, and thus may be fuzzy.</p></li>
</ol>
<p>Properties with imprecise boundaries annoy epidemiologists because they may lead to <a href="https://www.nature.com/articles/s41598-021-81814-3">different outcomes of their prediction models</a>. But from my ontologist’s viewpoint, we’re getting somewhere with these properties. From the computational side, <a href="https://www.sciencedirect.com/science/article/abs/pii/S095741741100978X">automated reasoning with fuzzy features</a> is possible. </p>
<p>COVID, at least early in 2020, easily ticked all eight boxes. A suitably automated reasoner would have classified that situation as a pandemic. But now, in early 2022? Severity (point 8) has largely decreased and immunity (point 4) has risen. Point 5 – are there worse variants of concern to come – is the million-dollar question. More ontological analysis is needed.</p>
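<p>As a toy illustration of such reasoning – not a substitute for the fuzzy reasoners cited above – the eight characteristics can be given membership degrees between 0 and 1 and combined with a simple rule. The particular numbers and the “every box above a threshold” rule below are invented for the example.</p>
<pre><code># Toy illustration only: invented membership degrees for the eight
# characteristics, early 2020 versus early 2022.
early_2020 = {
    "wide_geographic_extension": 1.0, "disease_movement": 1.0,
    "high_attack_rates": 0.9, "minimal_population_immunity": 1.0,
    "novelty": 1.0, "infectiousness": 1.0,
    "contagiousness": 1.0, "severity": 0.8,
}
early_2022 = dict(early_2020, minimal_population_immunity=0.3, severity=0.3)

def looks_like_pandemic(features: dict, threshold: float = 0.5) -> bool:
    # Crude rule: every characteristic must hold to at least the threshold.
    return min(features.values()) >= threshold

print(looks_like_pandemic(early_2020))   # True: all eight boxes ticked
print(looks_like_pandemic(early_2022))   # False: immunity and severity have shifted
</code></pre>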
<h2>Highlighting the difficulties</h2>
<p>Ontologically speaking, then, a pandemic is an event (“accomplishment”) that unfolds in time. To be classified as a pandemic, there are a number of features that aren’t all crisp and for which the imprecise boundaries haven’t all been set. Conversely, it implies that classifying the event as “not a pandemic” is just as imprecise. </p>
<p>This isn’t a full answer as to what a pandemic is ontologically, but it does shed light on the difficulties of calling it “over” – and illustrates well that there will be disagreement about it.</p>
<p class="fine-print"><em><span>Maria Keet does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>This sort of precision complements health experts’ efforts, helping humans to be more precise and communicate more precisely.Maria Keet, Associate professor in Computer Science, University of Cape TownLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1749782022-01-17T14:21:51Z2022-01-17T14:21:51ZHow to be a god: we might one day create virtual worlds with characters as intelligent as ourselves<figure><img src="https://images.theconversation.com/files/441057/original/file-20220117-13-11tpnul.jpg?ixlib=rb-1.1.0&rect=51%2C0%2C4960%2C3467&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Virtual character may soon be smarter than us.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/rendering-virtual-world-116473258">Michelangelus/Shutterstock</a></span></figcaption></figure><p>Most research into the ethics of Artificial Intelligence (AI) concerns its use for <a href="https://www.pgaction.org/declaration-support-treaty-prohibition-faw.html">weaponry</a>, <a href="https://theconversation.com/the-self-driving-trolley-problem-how-will-future-ai-systems-make-the-most-ethical-choices-for-all-of-us-170961">transport</a> or <a href="https://theconversation.com/our-casual-use-of-facial-analysis-tools-can-lead-to-more-sinister-applications-172595">profiling</a>. Although the dangers presented by an autonomous, racist tank cannot be understated, there is another aspect to all this. What about our responsibilities to the AIs we create?</p>
<p><a href="https://theconversation.com/gamer-disclaimer-virtual-worlds-can-be-as-fulfilling-as-real-life-29571">Massively-multiplayer online role-playing games</a> (such as World of Warcraft) are pocket realities populated chiefly by non-player characters. At the moment, these characters are not particularly smart, but give it 50 years and they will be.</p>
<p>Sorry? 50 years won’t be enough? Take 500. Take 5,000,000. We have the rest of eternity to achieve this. </p>
<p>You want planet-sized computers? You can have them. You want computers made from human brain tissue? You can have them. Eventually, I believe we <em>will</em> have virtual worlds containing characters as smart as we are – if not smarter – and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.</p>
<p>So knowing all that…should we?</p>
<h2>Ethical difficulties of free will</h2>
<p>As I’ve explored in <a href="https://mud.co.uk/richard/How%20to%20Be%20a%20God.pdf">my recent book</a>, whenever “should” is involved, ethics steps in and takes over – <a href="https://mitpress.mit.edu/books/ethics-computer-games">even for video games</a>. The first question to ask is whether our game characters of the future are worthy of being considered as moral entities or are simply bits in a database. If the latter, we needn’t trouble our consciences with them any more than we would characters in a word processor.</p>
<p>The question is actually moot, though. If we create our characters to <em>be</em> free-thinking beings, then we must treat them as if they <em>are</em> such – regardless of how they might appear to an external observer.</p>
<p>That being the case, then, can we switch our virtual worlds off? Doing so could be condemning billions of intelligent creatures to non-existence. Would it nevertheless be OK if we saved a copy of their world at the moment we ended it? Does the theoretical possibility that we may switch their world back on exactly as it was mean we’re not <em>actually</em> murdering them? What if we <a href="https://theconversation.com/act-now-to-preserve-our-disappearing-videogame-culture-or-its-game-over-65922">don’t have the original game software</a>?</p>
<p>Can we legitimately cause these characters suffering? We ourselves implement the very concept, so this isn’t so much a question about whether it’s OK to torment them as it is about whether tormenting them is even a thing. In modern societies, the default position is that it’s immoral to make free-thinking individuals suffer unless either they agree to it or it’s to save them (or someone else) from something worse. We can’t ask our characters to consent to be born into a world of suffering – they won’t exist when we create the game. </p>
<p>So, what about the “something worse” alternative? If you possess free will, you must be sapient, so must therefore be a moral being yourself. That means you must have <em>developed</em> morals, so it must be possible for bad things to happen to you. Otherwise, you couldn’t have reflected on what’s right or wrong to develop your morals. Put another way, unless bad things happen, there’s no free will. Removing free will from a being is tantamount to destroying the being it was previously, therefore yes, we do have to allow suffering or the concept of sapient character is an oxymoron.</p>
<hr>
<p><a href="https://open.spotify.com/episode/0X0XKrPcYCTwlNGDN4yvdF?si=zD5CURAcQT6ISi2jxovFFw&t=1816&context=spotify%3Ashow%3A14O3EsEGWQ4mK3XpKzsncP"><img src="https://images.theconversation.com/files/441125/original/file-20220117-25-twiq63.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=212&fit=crop&dpr=1" alt="Promotional image for podcast" width="100%"></a>
<br>
<em>Find other ways to listen to <a href="https://theconversation.com/crypto-countries-nigeria-and-el-salvadors-opposing-journeys-into-digital-currencies-podcast-174813">The Conversation Weekly podcast</a> here.</em></p>
<hr>
<h2>Afterlife?</h2>
<p>Accepting that our characters of the future are free-thinking beings, where would they fit in a hierarchy of importance? In general, given a straight choice between saving a sapient being (such as a toddler) or a merely sentient one (such as a dog), people would choose the former over the latter. Given a similar choice between saving a real dog or a virtual saint, which would prevail?</p>
<p>Bear in mind that if your characters perceive themselves to be moral beings but you don’t perceive them as such, they’re going to think you’re a jerk. As <a href="https://finalfantasy.fandom.com/wiki/Alphinaud_Leveilleur">Alphinaud Leveilleur</a>, a character in <em>Final Fantasy XIV</em>, neatly puts it (spoiler: having just discovered that his world was created by the actions of beings who as a consequence don’t regard him as properly alive): “<em>We</em> define our worth, not the circumstances of our creation!”.</p>
<figure class="align-center ">
<img alt="Image of a person playing World of Warcraft." src="https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4344%2C2893&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/441053/original/file-20220117-23-v73f18.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">World of Warcraft is massively-multiplayer online role-playing game.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/wroclaw-poland-september-04th-2018-woman-1185697654">Daniel Krason/Shutterstock</a></span>
</figcaption>
</figure>
<p>Are we going to allow our characters to die? It’s extra work to implement the concept. If they do live forever, do we make them invulnerable or merely stop them from dying? Life wouldn’t be much fun after falling into a blender, after all. If they do die, do we move them to gaming heaven (or hell) or simply erase them?</p>
<p>These aren’t the only questions we can ask. Can we insert ideas into their heads? Can we change their world to mess with them? Do we impose our morals on them or let them develop their own (with which we may disagree)? There are many more.</p>
<p>Ultimately, the biggest question is: should we create sapient characters in the first place?</p>
<p>Now you’ll have noticed that I’ve asked a lot of questions here. You may well be wondering what the answers are.</p>
<p>Well, so am I! That’s the point of this exercise. Humanity doesn’t yet have an ethical framework for the creation of realities of which we are gods. No system of meta-ethics yet exists to help us. We need to work this out <em>before</em> we build worlds populated by beings with free will, whether 50, 500, 5,000,000 years from now or tomorrow. These are questions for <em>you</em> to answer.</p>
<p>Be careful how you do so, though. You may set a precedent.</p>
<p>We ourselves are the non-player characters of Reality.</p>
<p class="fine-print"><em><span>Richard A. Bartle is affiliated with Humanists UK. </span></em></p>If virtual characters can be as smart as humans, having free will, can we kill or harm them?Richard A. Bartle, Professor of Computer Game Design, University of EssexLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1728502021-12-01T13:36:14Z2021-12-01T13:36:14ZHow the US census led to the first data processing company 125 years ago – and kick-started America’s computing industry<figure><img src="https://images.theconversation.com/files/434761/original/file-20211130-27-1uk0tsc.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C2394%2C2307&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This electromechanical machine, used in the 1890 U.S. census, was the first automated data processing system.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/niallkennedy/6414584">Niall Kennedy/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>The U.S. Constitution requires that a population count be conducted at the beginning of every decade. </p>
<p>This census has always been charged with political significance, and continues to be. That’s clear from <a href="https://www.cnn.com/2020/09/09/politics/census-challenges/index.html">the controversies in the run-up to the 2020 census</a>. </p>
<p>But it’s less widely known how important the census has been in developing the U.S. computer industry, a story that I tell in my book, “<a href="https://jhupbooks.press.jhu.edu/title/republic-numbers">Republic of Numbers: Unexpected Stories of Mathematical Americans through History</a>.” That history includes the founding of the first automated data processing company, the <a href="https://www.smithsonianmag.com/smithsonian-institution/herman-holleriths-tabulating-machine-2504989/">Tabulating Machine Company</a>, 125 years ago on December 3, 1896.</p>
<h2>Population growth</h2>
<p>The only use of the census clearly specified in the Constitution is to allocate seats in the House of Representatives. More populous states get more seats. </p>
<p>A minimalist interpretation of the census mission would require reporting only the overall population of each state. But the census has never confined itself to this.</p>
<p>A complicating factor emerged right at the beginning, with the Constitution’s distinction between “free persons” and “<a href="http://www.digitalhistory.uh.edu/disp_textbook.cfm?smtID=3&psid=163">three-fifths of all other persons</a>.” This was the Founding Fathers’ infamous mealy-mouthed compromise between those states with a large number of enslaved persons and those states where relatively few lived. </p>
<p><a href="https://www.census.gov/history/www/through_the_decades/index_of_questions/1790_1.html">The first census</a>, in 1790, also made nonconstitutionally mandated distinctions by age and sex. In subsequent decades, many other personal attributes were probed as well: occupational status, marital status, educational status, place of birth and so on.</p>
<p>As the country grew, each census required greater effort than the last, not merely to collect the data but also to compile it into usable form. <a href="https://www.jstor.org/stable/24987147?seq=1#page_scan_tab_contents">The processing of the 1880 census</a> was not completed until 1888. </p>
<p>It had become a mind-numbingly boring, error-prone, clerical exercise of a magnitude rarely seen. </p>
<p>Since the population was evidently continuing to grow at a rapid pace, those with sufficient imagination could foresee that processing the 1890 census would be gruesome indeed without some change in procedure. </p>
<p><iframe id="1Onyi" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/1Onyi/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>A new invention</h2>
<p>John Shaw Billings, a physician assigned to assist the Census Office with compiling health statistics, had closely observed the immense tabulation efforts required to deal with the raw data of 1880. He expressed his concerns to a young mechanical engineer assisting with the census, Herman Hollerith, a recent graduate of the Columbia School of Mines. </p>
<p>On Sept. 23, 1884, the U.S. Patent Office recorded a submission from the 24-year-old Hollerith, titled “<a href="https://pdfpiw.uspto.gov/.piw?PageNum=0&docid=00395782&IDKey=73D9506C5930%0D%0A&HomeUrl=http%3A%2F%2Fpatft.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO1%2526Sect2%3DHITOFF%2526d%3DPALL%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526s1%3D0395782.PN.%2526OS%3DPN%2F0395782%2526RS%3DPN%2F0395782">Art of Compiling Statistics</a>.”</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an old black and white photograph showing a man seated at a wooden desk-like machine looking at a bank of indicator dials" src="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=709&fit=crop&dpr=1 600w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=709&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=709&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=891&fit=crop&dpr=1 754w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=891&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=891&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Hollerith electric tabulating machine in use in 1902.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/history/img/1902_Hollerith_electric_tabulating_machine.jpg">United States Census Bureau</a></span>
</figcaption>
</figure>
<p>By progressively improving the ideas of this initial submission, Hollerith would decisively win an 1889 competition to improve the processing of the 1890 census. </p>
<p>The <a href="https://www.census.gov/history/www/innovations/technology/the_hollerith_tabulator.html">technological solutions</a> devised by Hollerith involved a suite of mechanical and electrical devices. The first crucial innovation was to translate data on handwritten census tally sheets to patterns of holes punched in cards. As Hollerith phrased it, in the 1889 revision of his patent application,</p>
<blockquote>
<p>“A hole is thus punched corresponding to person, then a hole according as person is a male or female, another recording whether native or foreign born, another either white or colored, &c.”</p>
</blockquote>
<p>This process required developing special machinery to ensure that holes could be punched with accuracy and efficiency. </p>
<p>Hollerith then devised a machine to “read” the card, by probing the card with pins, so that only where there was a hole would the pin pass through the card to make an electrical connection, resulting in the advance of the appropriate counter. </p>
<p>For example, if a card for a white male farmer passed through the machine, a counter for each of these categories would be increased by one. The card was made sturdy enough to allow passage through the card reading machine multiple times, for counting different categories or checking results.</p>
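<p>The logic of the tabulator is simple enough to sketch in a few lines of modern Python. The card layout and category names below are invented for illustration – the real 1890 cards carried many more fields.</p>
<pre><code>from collections import Counter

# Each "card" is represented as the set of positions where a hole was punched.
CATEGORIES = {0: "male", 1: "female", 2: "native born", 3: "foreign born",
              4: "white", 5: "colored", 6: "farmer"}

def tabulate(cards):
    """Mimic the card reader: every hole advances the matching counter."""
    counters = Counter()
    for punched_holes in cards:
        for position in punched_holes:
            counters[CATEGORIES[position]] += 1
    return counters

# A card for a white male farmer, and one for a foreign-born woman.
deck = [{0, 2, 4, 6}, {1, 3, 4}]
print(tabulate(deck))   # Counter({'white': 2, 'male': 1, 'native born': 1, ...})
</code></pre>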
<p>The count proceeded so rapidly that the <a href="https://play.google.com/books/reader?id=MGZqAAAAMAAJ&pg=GBS.PA1">state-by-state numbers needed for congressional apportionment</a> were certified before the end of November 1890. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=470&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=470&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=470&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This ‘mechanical punch card sorter’ was used for the 1950 census.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/library/photos/machinists_technicians_5.html">U.S. Census Bureau</a></span>
</figcaption>
</figure>
<h2>Rise of the punched card</h2>
<p>After his census success, <a href="https://www.worldcat.org/title/computer-a-history-of-the-information-machine/oclc/1110437971?referer=br&ht=edition">Hollerith went into business selling this technology</a>. The company he founded, the Tabulating Machine Company, would, after he retired, become International Business Machines - IBM. IBM led the way in perfecting card technology for recording and tabulating large sets of data for a variety of purposes. </p>
<p>By the 1930s, many businesses were using cards for record-keeping procedures, such as payroll and inventory. Some data-intensive scientists, especially astronomers, were also finding the cards convenient. IBM had by then standardized an 80-column card and had developed keypunch machines that would change little for decades. </p>
<p>Card processing became one leg of the mighty computer industry that blossomed after World War II, and IBM for a time would be the third-largest corporation in the world. Card processing served as a scaffolding for vastly more rapid and space-efficient purely electronic computers that now dominate, with little evidence remaining of the old regime. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blue IBM punch card.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Blue-punch-card-front.png">Gwern/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Those who have grown up knowing computers only as easily portable devices, to be communicated with by the touch of a finger or even by voice, may be unfamiliar with the room-size computers of the 1950s and ’60s, where the primary means of loading data and instructions was by creating a deck of cards at a keypunch machine, and then feeding that deck into a card reader. This persisted as the default procedure for many computers well into the 1980s. </p>
<p><a href="https://www.worldcat.org/title/grace-hopper-navy-admiral-and-computer-pioneer/oclc/19516564&referer=brief_results">As computer pioneer Grace Murray Hopper recalled</a> about her early career, “Back in those days, everybody was using punched cards, and they thought they’d use punched cards forever.”</p>
<p>Hopper had been an important member of the team that created the first commercially viable general-purpose computer, the Universal Automatic Computer, or UNIVAC, one of the card-reading behemoths. Appropriately enough, the first UNIVAC delivered, in 1951, was to the U.S. Census Bureau, still hungry to improve its data processing capabilities.</p>
<p>No, computer users would not use punched cards forever, but they used them through the Apollo Moon-landing program and the height of the Cold War. Hollerith would likely have recognized the direct descendants of his 1890s census machinery almost 100 years later. </p>
<p><em>This is an updated version of an article originally published on October 15, 2019.</em></p>
<p>[ <em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>. ]</p>
<p class="fine-print"><em><span>David Lindsay Roberts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As the country grew, each census required greater effort than the last. That problem led to the invention of the punched card – and the birth of an industry.David Lindsay Roberts, Adjunct Professor of Mathematics, Prince George's Community CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1656332021-08-19T12:07:14Z2021-08-19T12:07:14ZDigital health is a vital tool: here’s how we can make it more sustainable<p>The pandemic has shown us the extraordinary potential of digital health to fight global health inequalities by providing expanded access to healthcare: as well as by better <a href="https://www.theigc.org/publication/using-data-to-inform-the-covid-19-policy-response/">informing our responses</a> to health crises. </p>
<p>Tools such as wearable monitoring devices, video consultations, and even <a href="https://woebothealth.com/">chat-bots driven by AI</a> can provide care from a distance and often cost less than a face-to-face meeting with a doctor or nurse. This, in turn, can improve global access to high-quality treatment.</p>
<p>Throughout the pandemic, being able to collect real-time data from cases across the world has been vital to local and global responses to combat the virus and track its progress. <a href="https://www.dqindia.com/curious-case-genome-sequencing-covid19/">Machine learning</a> analysis of viral gene sequences, track-and-trace mobile apps and telehealth services have also played their part. But as this monumental shift towards digital health accelerates, the environmental issues it raises are often overlooked.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Recommendations for improving digital health practices" src="https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=274&fit=crop&dpr=1 600w, https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=274&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=274&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=345&fit=crop&dpr=1 754w, https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=345&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/415291/original/file-20210809-25-70ox7k.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=345&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nine recommendations from the Riyadh Declaration on Digital Health.</span>
<span class="attribution"><a class="source" href="http://fgfg">RGDHS 2020</a></span>
</figcaption>
</figure>
<p>Climate change disproportionately affects <a href="https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/environmental_protection-protection_environnement/climate-climatiques.aspx?lang=eng">developing countries</a>. Places that already face poor health outcomes are further subjected to the health effects of environmental change. Plus, considering that emissions from computing devices, data centres and communications networks already account for <a href="https://www.greencarcongress.com/2018/03/20180306-mcmaster.html">up to 4%</a> of global carbon emissions, leaving environmental factors out of digital health debates is a significant omission. </p>
<p>As we continue to roll out this indispensable infrastructure, we also need to assess how we can minimise its environmental impact. <a href="https://journals.sagepub.com/doi/full/10.1177/20552076211033421">My research</a> shows three main ways that digital health technologies can contribute to environmental change and what can be done.</p>
<h2>Green mining</h2>
<p>First, the raw materials needed to produce digital health technologies, including robotic tools, smartphones and cameras, are taken from mines, which are mostly located in developing countries. </p>
<p>The toxic waste spillages that can occur when mining these materials create serious <a href="https://www.sciencedirect.com/science/article/abs/pii/S0013935116302249">environmental degradation</a>, potentially exposing workers to dangerous toxins. Meanwhile, at the other end of the process, the mishandling of discarded electrical devices can also release <a href="https://wedocs.unep.org/bitstream/handle/20.500.11822/9648/Waste_crime_RRA.pdf?sequence=1&isAllowed=y">toxic chemicals</a> into the environment, creating severe health risks for local populations – including <a href="https://elytus.com/blog/e-waste-and-its-negative-effects-on-the-environment.html">organ damage</a>. </p>
<p>On top of this, the carbon emitted in producing electronic devices makes up around <a href="https://www.greencarcongress.com/2018/03/20180306-mcmaster.html">8%</a> of global carbon emissions. <a href="https://www.wired.com/story/ipads-crucial-health-tools-combating-covid-19/">Increased demand</a> for devices driven by digital health’s expansion will only push emissions higher.</p>
<p>Steps including developing <a href="https://www.ox.ac.uk/news/2021-06-29-oxford-scientists-show-how-green-mining-could-pave-way-net-zero-and-provide-metals">“green mining”</a> – mining practices that minimise environmental damage and emissions while maximising recycling and supply-chain efficiency – are vital to protect our planet alongside our health.</p>
<h2>Green cloud computing</h2>
<p>Second, from electronic health records to biometric data collected by wearable technologies, the digital health industry produces vast amounts of information. Health data accounts for around <a href="https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0493">30%</a> of the world’s data.</p>
<figure class="align-center ">
<img alt="A white hand holds a phone displaying a COVID vaccination passport screen." src="https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/416983/original/file-20210819-21-1sdajkp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Digital health tools, such as COVID-related apps, are generating more and more data globally.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/30478819@N08/51037382038">Marco Verch/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>This data and the insights it provides on population health are key to improving people’s health. But due to the electricity needed to run the huge servers that host cloud services, safely storing data in the cloud can take up to <a href="https://medium.com/stanford-magazine/carbon-and-the-cloud-d6f481b79dfe">one million times</a> more energy than saving data directly to devices. </p>
<p>To reduce the environmental impacts of data centres, initiatives like <a href="https://bigdataanalyticsnews.com/green-cloud-computing-sustainable-use/">green cloud computing</a> (which aims for carbon-neutral data processing, for example, by investing in <a href="https://theconversation.com/net-zero-carbon-neutral-carbon-negative-confused-by-all-the-carbon-jargon-then-read-this-151382">carbon offsets</a>) and <a href="https://css.umich.edu/factsheets/green-it-factsheet">virtualisation</a> (which reduces the physical number of servers needed to store data by shifting that data to virtual servers) should become key priorities.</p>
<p>The carbon costs of running artificial intelligence and <a href="https://www.news-medical.net/health/Blockchain-Applications-in-Healthcare.aspx#:%7E:text=The%20benefits%20of%20using%20blockchains,hands%20of%20unauthorized%20users%20by">blockchain health technologies</a> to better support patients are also significant. As such, environmentally conscious approaches such as <a href="https://www.section.io/engineering-education/sustainable-ai-with-tinyml/">tiny machine learning</a> and <a href="https://www.livemint.com/Leisure/BLGIDxTCGLQ8XAEE0LyvTN/Making-Artificial-Intelligence-compact.html">compact AI</a>, which reduce the size and power demands of software, need to be adopted.</p>
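<p>To make this concrete, here is a minimal, illustrative Python sketch (not part of the research cited above) of the kind of model-shrinking step that tiny machine learning relies on: converting a small neural network with TensorFlow Lite’s post-training quantisation so it can run on low-power devices. The model architecture and file name are hypothetical stand-ins for a real health-monitoring model.</p>
<pre><code>import tensorflow as tf

# A tiny illustrative model, standing in for a wearable health-monitoring classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to TensorFlow Lite with post-training quantisation, which stores weights
# at reduced precision so the model needs less memory, compute and energy to run.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the compact model for deployment on an edge device (file name is illustrative).
with open("monitor_model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantised model size: {len(tflite_model) / 1024:.1f} KB")</code></pre>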
<h2>Green IT</h2>
<p>Third, we need to consider whether the <a href="https://www.rcpjournals.org/content/futurehosp/8/1/e85">promise</a> that digital health will lower carbon emissions by reducing travel to physical health centres is likely to materialise. </p>
<p>Although the increase in telehealth tech means that more patients are accessing healthcare from their homes or workplaces, these reductions in local travel have been shown to have minimal effects on emissions, and telehealth only becomes cost-effective when it replaces local trips of at least <a href="https://www.rcpjournals.org/content/futurehosp/8/1/e85">7.2km</a> (just over four miles).</p>
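<p>As a rough back-of-the-envelope illustration of that threshold, the short Python sketch below (not taken from the cited study) compares a replaced trip against the 7.2km break-even distance and estimates the car emissions avoided. The per-kilometre emission factor is an assumed, illustrative figure, not a number from the article or the study.</p>
<pre><code># Rough illustration only: compares a replaced trip to the ~7.2 km break-even
# distance cited above, using an assumed per-km car emission factor.

BREAK_EVEN_KM = 7.2        # break-even trip distance cited in the article
CAR_KG_CO2_PER_KM = 0.17   # assumed average petrol-car figure, for illustration only

def avoided_emissions(trip_km: float) -> float:
    """Approximate kg of CO2 avoided by replacing one in-person trip with telehealth."""
    if trip_km >= BREAK_EVEN_KM:
        return trip_km * CAR_KG_CO2_PER_KM
    # Below the break-even distance, the session's own energy use is assumed
    # to cancel out the travel avoided, so no net saving is counted.
    return 0.0

for km in (3.0, 7.2, 25.0):
    print(f"{km:5.1f} km trip replaced -> roughly {avoided_emissions(km):.2f} kg CO2 avoided")</code></pre>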
<p>A more pressing – and overlooked – concern, however, is the cost associated with housing large telehealth operations in call centres. As with cloud servers, telecommunications centres need vast amounts of energy to <a href="https://www.greencarcongress.com/2018/03/20180306-mcmaster.html">power and cool</a> equipment. </p>
<figure class="align-center ">
<img alt="A stack of computer servers with brightly coloured wires" src="https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=407&fit=crop&dpr=1 600w, https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=407&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=407&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=512&fit=crop&dpr=1 754w, https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=512&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/415835/original/file-20210812-12-1w81w72.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=512&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The servers required to host digital health data consume huge amounts of energy.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Multiple_Server_.jpg">Wikimedia</a></span>
</figcaption>
</figure>
<p>The NHS has recently pledged to achieve a <a href="https://www.england.nhs.uk/greenernhs/wp-content/uploads/sites/51/2020/10/delivering-a-net-zero-national-health-service.pdf">net zero</a> carbon footprint by 2040. However, as the recent <a href="https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM.pdf">IPCC report</a> assessing the state of the world’s climate indicates, change must be more rapid.</p>
<p>In the Philippines – home to a large hub of international telehealth operators – <a href="https://dl.acm.org/doi/10.5555/2876911.2876913">green information technologies</a> such as recyclable office equipment and remote working are used to reduce the environmental costs associated with communication. Such practices must become commonplace. </p>
<p>Green initiatives should be adopted across the healthcare sector as far as possible. The problem is that many digital health technologies result from design decisions made beyond the field of healthcare, so <a href="https://www.digitalhealth.net/2021/03/rewired-2021-big-tech-as-role-to-play-in-turning-the-tide-on-pandemic/">big tech</a> must also do its part in creating more sustainable systems. </p>
<p>Without taking such steps, we run the risk that digital health will only lead to additional global health burdens, particularly among the world’s most vulnerable populations.</p>
<p class="fine-print"><em><span>Maddy Thompson receives funding from The Leverhulme Trust through the Early Career Fellowship route. </span></em></p>As the pandemic pushes healthcare online, it’s time to stop overlooking the environmental impacts.Maddy Thompson, Postdoctoral Fellow in Human Geography, Keele UniversityLicensed as Creative Commons – attribution, no derivatives.