tag:theconversation.com,2011:/fr/topics/computers-2044/articlesComputers – The Conversation2024-03-26T17:01:56Ztag:theconversation.com,2011:article/2262572024-03-26T17:01:56Z2024-03-26T17:01:56ZHow long before quantum computers can benefit society? That’s Google’s US$5 million question<figure><img src="https://images.theconversation.com/files/583117/original/file-20240320-26-rmpub2.jpg?ixlib=rb-1.1.0&rect=5%2C0%2C3828%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/quantum-computer-black-background-3d-render-1571871052">Bartlomiej K. Wroblewski / Shutterstock</a></span></figcaption></figure><p>Google and the XPrize Foundation have launched a competition worth US$5 million (£4 million) to develop <a href="https://blog.google/technology/research/google-gesda-and-xprize-launch-new-competition-in-quantum-applications/">real-world applications for quantum computers</a> that benefit society – by speeding up progress on one of the UN Sustainable Development Goals, for example. The principles of quantum physics suggest quantum computers could perform very fast calculations on particular problems, so this competition may expand the range of applications where they have an advantage over conventional computers.</p>
<p>In our everyday lives, the way nature works can generally be described by what we call <a href="https://en.wikipedia.org/wiki/Classical_physics#:%7E:text=Classical%20physical%20concepts%20are%20often,of%20quantum%20mechanics%20and%20relativity.">classical physics</a>. But nature behaves very differently at tiny quantum scales – below the size of an atom. </p>
<p>The race to harness quantum technology can be viewed as a new industrial revolution, progressing from devices that use the properties of classical physics to those utilising the <a href="https://www.energy.gov/science/doe-explainsquantum-mechanics#:%7E:text=Quantum%20mechanics%20is%20the%20field,%E2%80%9Cwave%2Dparticle%20duality.%E2%80%9D">weird and wonderful properties of quantum mechanics</a>. Scientists have spent decades trying to develop new technologies by harnessing these properties. </p>
<p>Given how often we are told that <a href="https://projects.research-and-innovation.ec.europa.eu/en/horizon-magazine/quantum-technologies">quantum technologies</a> will revolutionise our everyday lives, you may be surprised that we still have to search for practical applications by offering a prize. However, while there are numerous examples of success using quantum properties for enhanced precision in sensing and timing, there has been a surprising lack of progress in the development of quantum computers that outdo their classical predecessors.</p>
<p>The main bottleneck holding up this development is that the software – using <a href="https://www.nature.com/articles/npjqi201523">quantum algorithms</a> –
needs to demonstrate an advantage over computers based on classical physics. This is commonly known as <a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">“quantum advantage”</a>.</p>
<p>A crucial way quantum computing differs from classical computing is in using a property known as <a href="https://spectrum.ieee.org/what-is-quantum-entanglement">“entanglement”</a>. Classical computing <a href="https://web.stanford.edu/class/cs101/bits-bytes.html">uses “bits”</a> to represent information. These bits consist of ones and zeros, and everything a computer does comprises strings of these ones and zeros. But quantum computing allows these bits to be in a <a href="https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-qubit">“superposition” of ones and zeros</a>. In other words, it is as if these ones and zeros occur simultaneously in the quantum bit, or qubit.</p>
<p>It is this property which, in principle, allows many computational possibilities to be explored at once. Hence the belief that quantum computing can offer a significant advantage over classical computing for certain tasks. </p>
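<p>For readers who like to see the idea in symbols, a rough, purely illustrative way to picture superposition and entanglement (not taken from the article, and nothing like how real quantum hardware is built) is as vectors of “amplitudes”, where the squared amplitudes give measurement probabilities. A minimal Python sketch, assuming only numpy:</p>
<pre><code># A toy picture of qubit states as amplitude vectors (illustrative only).
import numpy as np

zero = np.array([1.0, 0.0])          # a bit that is definitely 0
one  = np.array([0.0, 1.0])          # a bit that is definitely 1
plus = (zero + one) / np.sqrt(2)     # superposition: equal amplitude for 0 and 1

print(np.round(plus**2, 2))          # [0.5 0.5] - equal chance of measuring 0 or 1

# An entangled two-qubit "Bell" state: measurements give 00 or 11, never 01 or 10.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.round(bell**2, 2))          # [0.5 0.  0.  0.5]
</code></pre>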
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers</a>
</strong>
</em>
</p>
<hr>
<h2>Notable quantum algorithms</h2>
<p>While performing many tasks simultaneously should lead to a performance increase over classical computers, putting this into practice has proven more difficult than theory would suggest. There are actually only a few notable quantum algorithms which can perform their tasks better than those using classical physics.</p>
<figure class="align-center ">
<img alt="Quantum chips - rendering" src="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/futuristic-cpu-quantum-processor-global-computer-1210158169">Yurchanka Siarhei / Shutterstock</a></span>
</figcaption>
</figure>
<p>The most notable are the <a href="https://www.st-andrews.ac.uk/physics/quvis/simulations_html5/sims/cryptography-bb84/Quantum_Cryptography.html">BB84 protocol</a>, developed in 1984, and <a href="https://www.nature.com/articles/s41598-021-95973-w">Shor’s algorithm</a>, developed in 1994, both of which use entanglement to outperform classical algorithms on particular tasks. </p>
<p>The BB84 protocol is a cryptographic protocol – a system for establishing secure, private communication between two or more parties – that is considered harder to break than comparable classical algorithms.</p>
<p>Shor’s algorithm uses entanglement to demonstrate how current <a href="https://www.rand.org/pubs/commentary/2023/09/when-a-quantum-computer-is-able-to-break-our-encryption.html#:%7E:text=One%20of%20the%20most%20important,secure%20internet%20traffic%20against%20interception.">classical encryption protocols can be broken</a>, because they are based on the factorisation of very large numbers. <a href="https://ieeexplore.ieee.org/document/365700">There is also evidence</a> that it can perform certain calculations faster than similar algorithms designed for conventional computers. </p>
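<p>The heart of Shor’s algorithm is finding the “period” of modular exponentiation; the quantum computer performs that step exponentially faster, while the rest is ordinary arithmetic. The sketch below is a hedged illustration, not the real algorithm: it brute-forces the period classically for a tiny number just to show how a period turns into factors.</p>
<pre><code>import math

def order(a, n):
    """Smallest r with a**r % n == 1. Brute force here; this is the step a
    quantum computer would perform exponentially faster for large n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def toy_shor(n, a):
    r = order(a, n)
    if r % 2 != 0:
        return None                     # need an even period; try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                     # trivial case; try another a
    return math.gcd(y - 1, n), math.gcd(y + 1, n)

print(toy_shor(15, 7))                  # (3, 5) - the factors of 15
</code></pre>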
<p>Despite the superiority of these two algorithms over conventional ones, few advantageous quantum algorithms have followed. However, researchers have not given up trying to develop them. Currently, there are a couple of main directions in research.</p>
<h2>Potential quantum benefits</h2>
<p>The first is to use quantum mechanics to assist in what are called <a href="https://arxiv.org/abs/2312.02279">large-scale optimisation tasks</a>. Optimisation – finding the best or most effective way to solve a particular task – is vital in everyday life, from ensuring traffic flow runs effectively, to managing operational procedures in factory pipelines, to streaming services deciding what to recommend to each user. It seems clear that quantum computers could help with these problems.</p>
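<p>To make “optimisation task” concrete: many such problems can be written as choosing binary variables to minimise a quadratic cost (a so-called QUBO), one of the problem formats targeted by quantum optimisation hardware. The toy example below simply solves a three-variable instance by brute force to show the shape of the problem; the matrix values are invented for illustration.</p>
<pre><code>from itertools import product

# Toy QUBO: choose three binary variables x to minimise sum of Q[i, j] * x[i] * x[j].
# Each item is rewarded on its own but adjacent pairs are penalised (values are illustrative).
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def cost(x):
    return sum(v * x[i] * x[j] for (i, j), v in Q.items())

best = min(product([0, 1], repeat=3), key=cost)
print(best, cost(best))                 # (1, 0, 1) -2.0
</code></pre>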
<p>If we could reduce the computational time required to perform the optimisation, it could save energy, reducing the carbon footprint of the many computers currently performing these tasks around the world and the data centres supporting them.</p>
<p>Another development that could offer wide-reaching benefits is to use quantum computation to simulate systems, such as combinations of atoms, that behave according to quantum mechanics. Understanding and predicting how quantum systems work in practice could, for example, lead to better drug design and medical treatments. </p>
<p>Quantum systems could also lead to improved electronic devices. As computer chips get smaller, quantum effects take hold, potentially reducing the devices’ performance. A better fundamental understanding of quantum mechanics could help avoid this.</p>
<p>While there has been significant investment in building quantum computers, there has been less focus on ensuring they will directly benefit the public. However, that now appears to be changing.</p>
<p>Whether we will all have quantum computers in our homes within the next 20 years remains doubtful. But, given the current financial commitment to making quantum computation a practical reality, it seems that society is finally in a better position to make use of them. What precise form will this take? There’s US$5 million on the line to find out.</p><img src="https://counter.theconversation.com/content/226257/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Adam Lowe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Quantum computing has huge promise from a technical perspective, but the practical benefits are less clear.Adam Lowe, Lecturer, School of Computer Science and Digital Technologies, Aston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2236762024-03-13T14:23:30Z2024-03-13T14:23:30ZAre you sitting too long in your office job? South African study offers some health tips<p>Sedentary behaviour can result in chronic health conditions such as obesity and heart diseases, bad news for the millions of office workers who are forced to sit behind their desks every day.</p>
<p>In its simplest form, the term “sedentary behaviour” describes sitting for too long and not breaking this up with movement. </p>
<p>Sitting for more than one hour without a break can be considered as <a href="https://doi.org/10.1007/s40279-023-01915-z">too much</a> sitting time.</p>
<p>Existing evidence shows that about 25% of adults and over 75% of adolescents worldwide are not meeting the World Health Organization physical activity recommendations.</p>
<p>The WHO’s latest <a href="https://www.who.int/publications/i/item/9789240015128">guidelines on physical activity and sedentary behaviour</a> recommend reducing physical inactivity by around 15% in all age groups by 2030.</p>
<p>The WHO guidelines, released in 2020, are <a href="https://doi.org/10.1136/bjsports-2020-102955">particularly important</a> for office workers who spend a significant portion of their work day sitting. </p>
<p>As a specialist in the epidemiology of physical activity, I took part in a new study into sedentary behaviour among a group of office-based workers in South Africa. </p>
<p>We found that conscious movement – even if this means standing instead of sitting – helped office workers stay healthy.</p>
<h2>What we found</h2>
<p>In our <a href="https://doi.org/10.1177/15598276231210479">study</a>, published in the American Journal of Lifestyle Medicine, colleagues and I collected data from 122 office workers based in Johannesburg. </p>
<p>The majority of the participants were female (68%) with an average age of 40 years. All had completed high school.</p>
<p>We measured body mass index, waist circumference, blood pressure and other key medical indicators of health and wellbeing.</p>
<p>Participants were asked to record their own sitting times. </p>
<p>We also used wrist-worn <a href="https://www.sciencedirect.com/topics/engineering/accelerometer-sensor">accelerometers</a> to collect data. This innovative method gave us a more accurate understanding of physical activity and sedentary behaviour compared to self-reported questionnaires.</p>
<p>The self-reported data by participants averaged 1.3 hours a day sitting for mealtimes, 3.9 hours a day related to work, 1.4 hours a day for transport, 1.7 hours a day for screen time, and 3.4 hours a day for other activities. This amounted to about 12 hours of sitting per day. </p>
<p>This was significantly higher than the three hours a day recorded by accelerometers. The discrepancy could partly be explained by participants – consciously or unconsciously – feeling more motivated to move when wearing these smart devices.</p>
<p>The blood pressure, cholesterol and <a href="https://medlineplus.gov/lab-tests/triglycerides-test/#:%7E:text=A%20triglycerides%20test%20is%20used,the%20risk%20of%20heart%20disease.">serum triglyceride levels</a> of our study participants were within normal ranges.</p>
<p>This finding is encouraging because it suggests they were at lower risk of cardiovascular disease. It could possibly be explained by the fact that these office workers were engaged in light physical activity for about two hours a day.</p>
<p>This also supports the WHO’s message emphasising that any movement is better than none. </p>
<p>They engaged in moderate to vigorous physical activity for about 50 minutes a day. This was less than the WHO recommendation of at least 60 minutes of moderate to vigorous physical activity a day. </p>
<p>In terms of overall health for South African office workers, our findings show that conscious efforts to move during the day can help keep them healthier.</p>
<h2>Keep standing and keep moving</h2>
<p>Light intensity physical activity, sometimes referred to as gentle exercise, can include movement such as getting up and walking to make a cup of coffee.</p>
<p>Office workers should stand whenever possible, as there is a growing body of evidence showing that standing is better than prolonged sitting, particularly for <a href="https://doi.org/10.1177/2047487317752186">energy expenditure</a> and reducing the risk of <a href="https://doi.org/10.1016/s0140-6736(16)30370-1">cardiovascular disease</a>.
It also keeps muscles active. </p>
<p>In practical terms, light physical activities during the workday could include:</p>
<ul>
<li><p>standing up to take a break</p></li>
<li><p>using a standing workstation</p></li>
<li><p>standing during meetings. </p></li>
</ul>
<p>Moderate to vigorous physical activities could include:</p>
<ul>
<li><p>using the stairs instead of the lifts</p></li>
<li><p>parking further from the office building entrance and walking at a brisk pace to the entrance</p></li>
<li><p>introducing fun activity breaks</p></li>
<li><p>engaging in various types of activities at a private gym close to work.</p></li>
</ul>
<p>Office workers should make conscious efforts to engage in physical activities during recreational times. </p>
<p>These could include: </p>
<ul>
<li><p>participating in weekly park runs</p></li>
<li><p>walking groups</p></li>
<li><p>structured sporting activities such as organised cycling, especially with safety and security in mind.</p></li>
</ul>
<p>In general, the more physically active people are, the better the health outcomes. </p>
<h2>Moving forward</h2>
<p>By 2021 worldwide obesity had nearly <a href="https://www.who.int/news-room/fact-sheets/detail/obesity-and-overweight">tripled</a> since 1975.</p>
<p>Physical inactivity is the leading modifiable risk factor for cardiometabolic diseases and all-cause mortality.</p>
<p>Studies such as ours help policymakers, businesses and companies formulate strategies to keep office workers healthier, promote a positive work-life balance and contribute to a more efficient work environment.</p>
<p>Our research underscores the urgent need to build more opportunities for movement into our everyday lives. This is especially important for South African office workers. </p>
<p>More studies from the global south are needed to develop context-relevant workplace interventions that can be easily integrated into employee health and wellness policy.</p><img src="https://counter.theconversation.com/content/223676/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Philippe Gradidge receives funding from the National Research Foundation (South Africa) and The South African Medical Research Council.</span></em></p>Office workers are sedentary for hours at a time. Our research looked at what they should do to stay healthy.Philippe Gradidge, Associate Professor, Department of Exercise Science and Sports Medicine, University of the WitwatersrandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2224082024-02-07T12:03:02Z2024-02-07T12:03:02ZUsing AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls<figure><img src="https://images.theconversation.com/files/573450/original/file-20240205-17-4tssh6.jpg?ixlib=rb-1.1.0&rect=33%2C0%2C3693%2C2460&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/technology-security-concept-personal-authentication-system-709257292">metamorworks/Shutterstock</a></span></figcaption></figure><p>Every minute, millions of social media posts, photos and videos flood the internet. <a href="https://www.socialpilot.co/blog/social-media-statistics">On average</a>, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video. </p>
<p>This vast ocean of online material needs to be constantly monitored for harmful or illegal content, like promoting terrorism and violence. </p>
<p>The sheer volume of content means that it’s not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations. </p>
<p>The concerted effort in recent years to <a href="https://www.tandfonline.com/doi/full/10.1080/1057610X.2023.2222901">develop tools</a> for the identification and removal of online terrorist content has, in part, been fuelled by the emergence of new laws and regulations. This includes the EU’s terrorist content online <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A32021R0784">regulation</a>, which requires hosting service providers to remove terrorist content from their platform within one hour of receiving a removal order from a competent national authority.</p>
<h2>Behaviour and content-based tools</h2>
<p>In broad terms, there are two types of tools used to root out terrorist content. The first looks at certain account and message behaviour. This includes how old the account is, the use of trending or unrelated hashtags and abnormal posting volume. </p>
<p>In many ways, this is similar to spam detection, in that it does not pay attention to content, and is <a href="https://www.resolvenet.org/research/remove-impede-disrupt-redirect-understanding-combating-pro-islamic-state-use-file-sharing">valuable for detecting</a> the rapid dissemination of large volumes of content, which are often bot-driven. </p>
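<p>As a purely illustrative sketch of what behaviour-based screening might look like (the signals follow the ones mentioned above, but the thresholds, weights and the function itself are invented for this example, not drawn from any real moderation system):</p>
<pre><code># Toy behaviour-based scorer; thresholds and weights are illustrative assumptions.
def suspicion_score(account_age_days, posts_last_hour, trending_tags_per_post):
    score = 0
    if account_age_days in range(8):        # account under a week old
        score += 1
    if posts_last_hour >= 60:               # abnormal, possibly bot-driven, volume
        score += 2
    if trending_tags_per_post >= 5:         # piggybacking on unrelated trending tags
        score += 1
    return score

print(suspicion_score(2, 120, 8))           # 4 - would be queued for closer review
</code></pre>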
<p>The second type of tool is content-based. It focuses on linguistic characteristics, word use, images and web addresses. Automated content-based tools take <a href="https://tate.techagainstterrorism.org/news/tcoaireport">one of two approaches</a>. </p>
<p><strong>1. Matching</strong></p>
<p>The first approach is based on comparing new images or videos to an existing database of images and videos that have previously been identified as terrorist in nature. One challenge here is that terror groups are known to try and evade such methods by producing subtle variants of the same piece of content. </p>
<p>After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity <a href="https://about.fb.com/news/2019/03/technical-update-on-new-zealand/">were in circulation</a>. </p>
<p>So, to combat this, matching-based tools generally use <a href="https://about.fb.com/news/2019/08/open-source-photo-video-matching/">perceptual hashing</a> rather than cryptographic hashing. Hashes are a bit like digital fingerprints, and cryptographic hashing acts like a secure, unique identity tag. Even changing a single pixel in an image drastically alters its fingerprint, preventing false matches. </p>
<p>Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes like pixel colour adjustments, but identifies images with the same core content. This makes perceptual hashing more resilient to tiny alterations to a piece of content. But it also means that the hashes are not entirely random, and so could potentially be used to try and <a href="https://towardsdatascience.com/black-box-attacks-on-perceptual-image-hashes-with-gans-cc1be11f277">recreate</a> the original image.</p>
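<p>A minimal sketch of the difference between the two kinds of hashing, using a tiny grid of brightness values in place of a real image (the perceptual hashes used by platforms are far more sophisticated than this simple “average hash”):</p>
<pre><code>import hashlib

# A 4x4 "image" as raw brightness values, so no imaging library is needed.
img = [[10, 12, 200, 210],
       [11, 13, 205, 215],
       [ 9, 14, 198, 208],
       [12, 11, 202, 212]]

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def crypto_hash(pixels):
    return hashlib.sha256(str(pixels).encode()).hexdigest()[:16]

tweaked = [row[:] for row in img]
tweaked[0][0] += 1                           # change a single pixel very slightly

print(average_hash(img) == average_hash(tweaked))   # True  - perceptual hash unchanged
print(crypto_hash(img) == crypto_hash(tweaked))     # False - cryptographic hash changes completely
</code></pre>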
<figure class="align-center ">
<img alt="A close up of a mobile phone screen displaying several social media apps." src="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Millions of posts, images and videos are uploaded to social media platforms every minute.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/moscow-russia-29072023-new-elon-musks-2339442245">Viktollio/Shutterstock</a></span>
</figcaption>
</figure>
<p><strong>2. Classification</strong></p>
<p>The second approach relies on classifying content. It <a href="https://www.cambridgeconsultants.com/insights/whitepaper/ofcom-use-ai-online-content-moderation">uses</a> machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a lot of examples like texts labelled as terrorist content or not by human content moderators. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own. </p>
<p>Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review. </p>
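<p>A minimal sketch of this classification approach, assuming scikit-learn is installed and using harmless stand-in sentences; a real system would be trained on vastly larger, expert-labelled datasets and far richer models:</p>
<pre><code># Toy text classifier: learn from labelled examples, then score new posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["join our charity fun run this weekend",
               "donate to the local food bank",
               "incite violence against the group",
               "a call to attack the building"]
train_labels = [0, 0, 1, 1]                  # 0 = benign, 1 = flag for human review

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["they plan to attack tonight"]))     # likely [1]
print(model.predict(["come to the charity bake sale"]))   # likely [0]
</code></pre>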
<p>This approach also <a href="https://tate.techagainstterrorism.org/news/tcoaireport">faces challenges</a>, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and <a href="https://oro.open.ac.uk/69799/">resource-intensive</a>. </p>
<p>The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also have difficulty understanding context, including <a href="https://doi.org/10.1177/2053951719897945">subtlety and irony</a>. They also <a href="https://cdt.org/wp-content/uploads/2017/11/Mixed-Messages-Paper.pdf">lack</a> cultural sensitivity, including variations in dialect and language use across different groups. </p>
<p>These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as <a href="https://restofworld.org/2021/why-facebook-keeps-failing-in-ethiopia/">Ethiopia</a> and <a href="https://www.newamerica.org/the-thread/facebooks-content-moderation-language-barrier/">Romania</a>, while free speech activists in countries such as <a href="https://www.middleeasteye.net/news/revealed-seven-years-later-how-facebook-shuts-down-free-speech-egypt">Egypt</a>, <a href="https://syrianobserver.com/news/58430/facebook-deletes-accounts-of-assad-opponents.html">Syria</a> and <a href="https://www.accessnow.org/transparency-required-is-facebooks-effort-to-clean-up-operation-carthage-damaging-free-expression-in-tunisia/">Tunisia</a> have reported having their content removed.</p>
<h2>We still need human moderators</h2>
<p>So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged. </p>
<p>But this is demanding and draining work, and there have been <a href="https://www.wired.co.uk/article/facebook-content-moderators-ireland">damning reports</a> regarding the working conditions of moderators, with many tech companies such as Meta <a href="https://www.stern.nyu.edu/experience-stern/faculty-research/who-moderates-social-media-giants-call-end-outsourcing">outsourcing</a> this work to third-party vendors. </p>
<p>To address this, we <a href="https://tate.techagainstterrorism.org/news/tcoaireport">recommend</a> the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the wellbeing of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly. </p>
<p>But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise. </p>
<p>Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be <a href="https://www.prnewswire.com/news-releases/content-moderation-solutions-market-to-cross-us-32-bn-by-2031-tmr-report-301514155.html">worth $32bn by 2031</a>. </p>
<p>But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.</p>
<p>So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded <a href="https://tate.techagainstterrorism.org/">Tech Against Terrorism Europe</a> project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available like Meta’s <a href="https://about.fb.com/news/2022/12/meta-launches-new-content-moderation-tool/">Hasher-Matcher-Actioner</a>, which companies can use to build their own database of hashed terrorist content. </p>
<p>International organisations, governments and tech platforms must prioritise the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.</p><img src="https://counter.theconversation.com/content/222408/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stuart Macdonald receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101). </span></em></p><p class="fine-print"><em><span>Ashley A. Mattheis receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101).</span></em></p><p class="fine-print"><em><span>David Wells receives funding from the Council of Europe to conduct an analysis of emerging patterns of misuse of technology by terrorist actors (ongoing)</span></em></p>The complex task of tackling online terror needs human eyes as well as artificial intelligence.Stuart Macdonald, Professor of Law, Swansea UniversityAshley A. Mattheis, Postdoctoral Researcher, School of Law and Government, Dublin City UniversityDavid Wells, Honorary Research Associate at the Cyber Threats Research Centre, Swansea UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2216932024-01-25T16:18:26Z2024-01-25T16:18:26ZSwimming pools could slash bills by harvesting heat from servers – here’s how to make it work<figure><img src="https://images.theconversation.com/files/571458/original/file-20240125-23-j7stsw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">More swimming pools could soon be heated using excess heat emitted from data centres in more than 150 UK locations</span> </figcaption></figure><p>My teenage son regularly complains about his room being too warm, even during winter. While the rest of the house is at a comfortable temperature, the video game PC he plays emits a significant amount of heat. </p>
<p>A high-spec computer like his typically has a power of more than 800 watts. That’s enough to <a href="https://www.omnicalculator.com/physics/water-heating">boil one litre of water</a> from room temperature in less than eight minutes. So at scale, computer data processing centres are a massive, and predominantly untapped, source of heat or thermal energy. </p>
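<p>The arithmetic behind that claim is straightforward (assuming, as a simplification, that all of the electrical power ends up as heat in the water):</p>
<pre><code># Back-of-the-envelope check: time for an 800 W load to heat 1 litre of water
# from about 20°C to 100°C.
mass_kg = 1.0
specific_heat = 4186          # joules per kg per °C for water
temperature_rise = 80         # from roughly 20°C to boiling
power_w = 800

energy_j = mass_kg * specific_heat * temperature_rise
print(energy_j / power_w / 60)   # about 7 minutes
</code></pre>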
<p>A new project run by UK tech firm Deep Green is recapturing some of this heat and using it to warm swimming pools. In March 2023, the company connected <a href="https://www.theguardian.com/business/2023/mar/14/innovative-heat-tech-save-england-swimming-pools-from-closure">Exmouth Leisure Centre’s</a> 25-metre pool in Devon to an onsite data centre, slashing their <a href="https://www.theguardian.com/environment/2024/jan/15/energy-innovation-could-heat-150-uk-swimming-pools-after-200m-investment#:%7E:text=In%20Exmouth%2C%20the%20swimming%20pool,edge%20over%20traditional%20data%20centres">heating bill by 60%</a>. </p>
<p>Now, that <a href="https://www.theguardian.com/environment/2024/jan/15/energy-innovation-could-heat-150-uk-swimming-pools-after-200m-investment">pilot is scaling up</a> and 150 public swimming pools could soon be recycling excess heat and cutting their energy bills at the same time. </p>
<h2>The power of data</h2>
<p>The potential to improve energy efficiency is huge. Just like my son’s computer, each data centre produces excess heat from huge servers that work around the clock. Depending on their size, data centre buildings contain thousands, and sometimes <a href="https://www.racksolutions.com/news/blog/how-many-servers-does-a-data-center-have/#:%7E:text=To%20summarize%2C%20a%20data%20center,servers%20in%20a%2052U%20Rack.">millions</a> of computer servers arranged in <a href="https://www.youtube.com/watch?v=zDAYZU4A3w0">rows of special IT racks</a>. </p>
<p>Just as individual computers need high-speed fans and heat sinks on components such as microprocessors, data centres rely on cooling towers to dissipate excess heat to the surrounding environment. That cooling accounts for about <a href="https://www.device42.com/data-center-infrastructure-management-guide/data-center-power/">50% </a> of total electricity consumption in data centres.</p>
<p>There were approximately 10,978 data centres <a href="https://brightlio.com/data-center-stats/#:%7E:text=The%20global%20data%20center%20market%20is%20massive%20and%20growing.,%3A%20Cloudscene%2C%20Datacentermap%2C%20Statista">worldwide in 2023</a>.
With increasing online connectivity and the expansion of AI applications and cloud services, many <a href="https://www.bbc.co.uk/news/uk-england-beds-bucks-herts-68028666">more data centres</a> will be needed in the future. </p>
<p>In 2022, global data centre electricity consumption was between 240 and 340 TWh, according to the <a href="https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks">International Energy Agency</a>. This is vast given that the UK’s total annual electricity consumption was about <a href="https://www.statista.com/statistics/323410/domestic-electricity-volumes-in-the-united-kingdom-uk/">275 TWh</a> in the same year. In the Republic of Ireland, for example, data centres accounted for <a href="https://www.bbc.co.uk/news/articles/cpe9l5ke5jvo">20% of the total electricity consumption</a> in 2022. </p>
<h2>Successful storage</h2>
<p>Heat emitted from data centres can be harnessed to improve energy efficiency and profitability, and water is an ideal heat storage medium. We already use water-filled radiators to heat buildings. That’s because water has high thermal capacity and can store more heat than the same volume of air could. </p>
<p>There is a growing awareness of the need to reuse <a href="https://www.theguardian.com/environment/2023/feb/23/recapturing-excess-heat-could-power-most-of-europe-say-experts">excess heat in cities</a> to reduce the impact on the environment. Public swimming pools offer one ideal solution. </p>
<p>With huge quantities of pool water and showers that need heating, plus huge air-filled spaces within the buildings that must be kept warm, swimming pools are an ideal consumer of data centre heat. These facilities are also normally open daily, well beyond the hours of 9am to 5pm, so they use much more hot water and need heating for longer than office buildings.</p>
<p>With the recent increase in energy prices, many swimming pools are <a href="https://www.theguardian.com/environment/2024/jan/15/energy-innovation-could-heat-150-uk-swimming-pools-after-200m-investment">suffering financially</a> and struggle to raise prices because of the cost-of-living crisis and energy poverty. In fact, across England, <a href="https://www.theguardian.com/society/2023/mar/12/england-has-lost-almost-400-swimming-pools-since-2010">85 swimming pools have been closed</a> since 2019, mainly for financial reasons. </p>
<p>Pool closures driven by high energy costs could also have a negative public <a href="https://www.swimming.org/swimengland/more-tier-four-areas/">health effect</a>, by leaving communities without swimming facilities. So the successful use of waste heat from data centres to heat swimming pools could have many benefits. </p>
<p>If higher temperatures of water are needed than what cooling systems in data centres could provide, <a href="https://www.sciencedirect.com/science/article/pii/S0378778823008861?via%3Dihub">heat pumps</a> could be used to upgrade the temperatures to levels suitable to provide hot water and <a href="https://www.sciencedirect.com/science/article/abs/pii/S0306261917315830?via%3Dihub">efficient heating systems</a>. </p>
<p>A typical heat pump in this configuration could produce water at temperatures of <a href="https://assets.publishing.service.gov.uk/media/5a8033fae5274a2e87db874d/Domestic_High_Temperature_HPs-_FINAL2.pdf">around 60-65°C</a>. For saunas and steam rooms, where the required temperature is above water’s boiling point, electric heaters could provide an additional boost. </p>
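<p>To give a feel for the numbers involved, here is a rough sketch of how a heat pump “upgrades” low-grade waste heat; the recovered-heat figure and the coefficient of performance are illustrative assumptions, not values from the Exmouth project:</p>
<pre><code># Rough heat pump sketch: waste heat in, plus some electricity, gives hotter water out.
waste_heat_kw = 100           # low-grade heat recovered from the servers (assumed)
cop = 4.0                     # heat delivered per unit of electricity (assumed)

electricity_kw = waste_heat_kw / (cop - 1)    # extra input needed to lift the temperature
delivered_kw = waste_heat_kw + electricity_kw

print(round(electricity_kw, 1), round(delivered_kw, 1))   # 33.3 kW in, 133.3 kW of ~60°C heat out
</code></pre>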
<p>How can this play a role in transitioning towards a low-carbon future?
We need to design infrastructure that brings sources of waste heat closer to the facilities that demand heat. </p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S1876610217307890?via%3Dihub">My team’s research has shown</a> that this could save energy and money, particularly in the case of integrating water in flooded coal mines with waste heat from gas engines to provide extremely efficient heating systems. </p>
<p>So data centres need to be built near facilities that can consume excess heat, such as pools, sports centres and other public buildings with a high demand for space and water heating. To implement that, we need <a href="https://www.danfoss.com/en-gb/about-danfoss/news/cf/danfoss-google-microsoft-and-schneider-electric-join-forces-in-new-innovation-hub-to-accelerate-green-transition-of-data-centres/">better communication</a> between council authorities, businesses and communities that can help expand the scope of this technology. For a seriously streamlined system, we need more forward-thinking urban planning. </p>
<hr>
<figure class="align-right ">
<img alt="Imagine weekly climate newsletter" src="https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/434988/original/file-20211201-21-13avx6y.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>Don’t have time to read about climate change as much as you’d like?</em></strong>
<br><em><a href="https://theconversation.com/uk/newsletters/imagine-57?utm_source=TCUK&utm_medium=linkback&utm_campaign=Imagine&utm_content=DontHaveTimeTop">Get a weekly roundup in your inbox instead.</a> Every Wednesday, The Conversation’s environment editor writes Imagine, a short email that goes a little deeper into just one climate issue. <a href="https://theconversation.com/uk/newsletters/imagine-57?utm_source=TCUK&utm_medium=linkback&utm_campaign=Imagine&utm_content=DontHaveTimeBottom">Join the 30,000+ readers who’ve subscribed so far.</a></em></p>
<hr><img src="https://counter.theconversation.com/content/221693/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amin Al-Habaibeh receives funding from Innovate UK, The British Council, Royal academy of Engineering, UK Research and Innovation and the European Commission. </span></em></p>Computer data centres produce vast amounts of heat that often gets wasted - now that’s being harnessed to warm swimming pools and improve overall energy efficiency.Amin Al-Habaibeh, Professor of Intelligent Engineering Systems, Nottingham Trent UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2205352024-01-19T13:42:02Z2024-01-19T13:42:02ZMac at 40: User experience was the innovation that launched a technology revolution<figure><img src="https://images.theconversation.com/files/569686/original/file-20240116-19-t76qy0.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1159%2C877&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The original Macintosh computer may seem quaint today, but the way users interacted with it triggered a revolution 40 years ago.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/markgregory/35604028241"> Mark Mathosian/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span></figcaption></figure><p>Technology innovation requires solving hard technical problems, right? Well, yes. And no. As the Apple Macintosh turns 40, what began as Apple prioritizing the squishy concept of “user experience” in its 1984 flagship product is, today, clearly vindicated by its blockbuster products since.</p>
<p>It turns out that designing for usability, efficiency, accessibility, elegance and delight pays off. Apple’s market capitalization is now over US$2.8 trillion, and its brand is every bit as associated with the term “design” as the best New York or Milan fashion houses are. Apple turned technology into fashion, and it did it through user experience.</p>
<p>It began with the Macintosh.</p>
<p>When Apple announced the Macintosh personal computer with a Super Bowl XVIII <a href="https://invention.si.edu/remembering-apple-s-1984-super-bowl-ad">television ad</a> on Jan. 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/2zfqw8nhUwA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Apple’s 1984 Super Bowl commercial is as iconic as the product it introduced.</span></figcaption>
</figure>
<p>This was not about computing anymore. IBM, Commodore and Tandy did computers. As a <a href="https://scholar.google.com/citations?hl=en&user=TmZ3howAAAAJ&view_op=list_works&sortby=pubdate">human-computer interaction scholar</a>, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that “computer stuff” – circuits and wires and separate motherboards and monitors – were neatly packaged and hidden away within one sleek integrated box.</p>
<p>You weren’t supposed to dig into that box, and you didn’t need to dig into that box – not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box <a href="https://doi.org/10.1016/j.intcom.2010.04.002">made them feel</a>.</p>
<h2>Beyond the mouse and desktop metaphor</h2>
<p>As computers go, was the Macintosh innovative? Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows and so on. The Macintosh was not the first personal computer meant for home, office or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The <a href="https://doi.org/10.1145/242388.242405">Apple Lisa</a>, released a year before, had them all.</p>
<p>It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory – not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.</p>
<p>Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multibutton mice (Xerox STAR, Commodore 64), the Macintosh used a <a href="https://everest-pipkin.com/writing/beautiful_house.pdf">desktop metaphor</a> in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double-click and drag-and-drop icons without typing commands.</p>
<p>The <a href="https://spectrum.ieee.org/xerox-alto">Xerox Alto</a> had first exhibited the concept of icons, invented in David Canfield Smith’s <a href="https://doi.org/10.1007/978-3-0348-5744-4">1975 Ph.D. dissertation</a>. The 1981 <a href="http://dl.acm.org/citation.cfm?id=66893.66894">Xerox Star</a> and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.</p>
<p>The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/2B-XwPjn9YY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Apple CEO Steve Jobs introduced the Macintosh on Jan. 24, 1984.</span></figcaption>
</figure>
<p>The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.</p>
<h2>A good experience</h2>
<p>Although I hesitate to use the cliches “natural” or “intuitive” when it comes to fabricated worlds on a screen – nobody is born knowing what a desktop window, pull-down menu or double-click is – the Macintosh was the first personal computer to make user experience the driver of technical achievement. It indeed was <a href="https://www.computerhistory.org/revolution/personal-computers/17/303">simple to operate</a>, especially compared with command-line computers at the time.</p>
<p>Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users – at work, school or in the home – to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices and computers of every kind.</p>
<p>According to Market Growth Reports, companies devoted to providing user experience tools and services <a href="https://www.marketgrowthreports.com/global-user-experience-ux-market-26446759">were worth $548.91 million in 2023</a> and are expected to reach $1.36 billion by 2029. User experience companies provide software and services to support usability testing, user research, <a href="https://dynamics.microsoft.com/en-us/customer-voice/what-is-the-voice-of-customer/">voice-of-the-customer</a> initiatives and user interface design, among many other user experience activities.</p>
<p>Rarely today do consumer products succeed in the market based on functionality alone. Consumers <a href="https://doi.org/10.1111/j.1948-7169.2006.tb00027.x">expect a good user experience and will pay a premium for it</a>. The Macintosh <a href="https://biztechmagazine.com/article/2019/01/original-apple-macintosh-revolutionized-personal-computing">started that obsession</a> and demonstrated its centrality. </p>
<p>It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.</p><img src="https://counter.theconversation.com/content/220535/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>I have had two Ph.D. students receive Apple Ph.D. AI/ML Fellowships. This funding does not support me personally, but supports two of the Ph.D. students that I have advised. They obtained these fellowships through competitive submissions to Apple based on an open solicitation.</span></em></p>Apple’s phenomenal success and the field of user experience design can be traced back to the launch of the Macintosh personal computer.Jacob O. Wobbrock, Professor of Information, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2200442023-12-18T16:17:12Z2023-12-18T16:17:12ZA new supercomputer aims to closely mimic the human brain — it could help unlock the secrets of the mind and advance AI<figure><img src="https://images.theconversation.com/files/566252/original/file-20231218-15-hajmbj.jpg?ixlib=rb-1.1.0&rect=19%2C9%2C6470%2C3940&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businessman-touching-digital-human-brain-cell-582507070">Sdecoret / Shutterstock</a></span></figcaption></figure><p>A supercomputer scheduled to go online in April 2024 will rival the estimated rate of operations in the human brain, <a href="https://www.westernsydney.edu.au/newscentre/news_centre/more_news_stories/world_first_supercomputer_capable_of_brain-scale_simulation_being_built_at_western_sydney_university">according to researchers in Australia</a>. The machine, called DeepSouth, is capable of performing 228 trillion operations per second. </p>
<p>It’s the world’s first supercomputer capable of simulating networks of neurons and synapses (key biological structures that make up our nervous system) at the scale of the human brain.</p>
<p>DeepSouth belongs to an approach <a href="https://www.nature.com/articles/s43588-021-00184-y">known as neuromorphic computing</a>, which aims to mimic the biological processes of the human brain. It will be run from the International Centre for Neuromorphic Systems at Western Sydney University.</p>
<p>Our brain is the most amazing computing machine we know. By distributing its
computing power to billions of small units (neurons) that interact through trillions of connections (synapses), the brain can rival the most powerful supercomputers in the world, while requiring only about as much power as a fridge light bulb.</p>
<p>Supercomputers, meanwhile, generally take up lots of space and need large amounts of electrical power to run. The world’s most powerful supercomputer, the <a href="https://www.hpe.com/uk/en/compute/hpc/cray/oak-ridge-national-laboratory.html">Hewlett Packard Enterprise Frontier</a>, can perform just over one quintillion operations per second. It covers 680 square metres (7,300 sq ft) and requires 22.7 megawatts (MW) to run. </p>
<p>Our brains can perform the same number of operations per second with just 20 watts of power, while weighing just 1.3kg-1.4kg. Among other things, neuromorphic computing aims to unlock the secrets of this amazing efficiency.</p>
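<p>Estimates of how many “operations” a brain performs vary enormously, but putting the figures quoted above on a common operations-per-watt footing shows the scale of the efficiency gap (a rough comparison, using DeepSouth’s target rate as the brain-scale figure):</p>
<pre><code># Rough operations-per-watt comparison using the figures quoted in this article.
frontier_ops_per_s = 1e18        # just over one quintillion operations per second
frontier_power_w = 22.7e6        # 22.7 MW

brain_scale_ops_per_s = 228e12   # DeepSouth's target, matching the estimated brain rate
brain_power_w = 20

print(frontier_ops_per_s / frontier_power_w)      # ~4.4e10 operations per watt
print(brain_scale_ops_per_s / brain_power_w)      # ~1.1e13 operations per watt
</code></pre>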
<h2>Transistors at the limits</h2>
<p>On June 30 1945, the mathematician and physicist <a href="https://www.ias.edu/von-neumann">John von Neumann</a> described the design of a new machine, the <a href="https://ieeexplore.ieee.org/document/194089">Electronic Discrete Variable Automatic Computer (Edvac)</a>. This effectively defined the modern electronic computer as we know it. </p>
<p>My smartphone, the laptop I am using to write this article and the most powerful supercomputer in the world all share the same fundamental structure introduced by von Neumann almost 80 years ago. <a href="https://www.sciencedirect.com/topics/computer-science/von-neumann-architecture">These all have distinct processing and memory units</a>, where data and instructions are stored in the memory and computed by a processor.</p>
<p>For decades, the number of transistors on a microchip doubled approximately every two years, <a href="https://ieeexplore.ieee.org/abstract/document/591665">an observation known as Moore’s Law</a>. This allowed us to have smaller and cheaper computers. </p>
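<p>Moore’s law can be written as a simple doubling rule. As a quick illustration (the 1971 Intel 4004 starting point is an example added here, not taken from the article):</p>
<pre><code># Transistor count after a given number of years, doubling every two years.
def transistors(initial, years, doubling_period=2):
    return initial * 2 ** (years / doubling_period)

# Starting from the 2,300 transistors of the 1971 Intel 4004, 50 years of doubling:
print(f"{transistors(2300, 50):,.0f}")   # ~77 billion, the order of magnitude of today's largest chips
</code></pre>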
<p>However, transistor sizes are now approaching the atomic scale. At these tiny sizes, excessive heat generation is a problem, as is a phenomenon called quantum tunnelling, which interferes with the functioning of the transistors. <a href="https://qz.com/852770/theres-a-limit-to-how-small-we-can-make-transistors-but-the-solution-is-photonic-chips#:%7E:text=They're%20made%20of%20silicon,we%20can%20make%20a%20transistor.">This is slowing down</a> and will eventually halt transistor miniaturisation.</p>
<p>To overcome this issue, scientists are exploring new approaches to
computing, starting from the powerful computer we all have hidden in our heads, the human brain. Our brains do not work according to John von Neumann’s model of the computer. They don’t have separate computing and memory areas. </p>
<p>They instead work by connecting billions of nerve cells that communicate information in the form of electrical impulses. Information can be passed from <a href="https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/action-potentials-and-synapses">one neuron to the next through a junction called a synapse</a>. The organisation of neurons and synapses in the brain is flexible, scalable and efficient. </p>
<p>So in the brain – and unlike in a computer – memory and computation are governed by the same neurons and synapses. Since the late 1980s, scientists have been studying this model with the intention of importing it to computing.</p>
<figure class="align-center ">
<img alt="Microchip." src="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/566265/original/file-20231218-25-yjbwxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The continuing miniaturisation of transistors on microchips is limited by the laws of physics.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-presentation-new-generation-microchip-gloved-691548583">Gorodenkoff / Shutterstock</a></span>
</figcaption>
</figure>
<h2>Imitation of life</h2>
<p>Neuromorphic computers are based on intricate networks of simple, elementary processors (which act like the brain’s neurons and synapses). The main advantage of this is that these machines <a href="https://www.electronicsworld.co.uk/advances-in-parallel-processing-with-neuromorphic-analogue-chip-implementations/34337/">are inherently “parallel”</a>. </p>
<p>This means that, <a href="https://www.pnas.org/doi/full/10.1073/pnas.95.3.933">as with neurons and synapses</a>, virtually all the processors in a computer can potentially be operating simultaneously, communicating in tandem.</p>
<p>In addition, because the computations performed by individual neurons and synapses are very simple compared with those performed by traditional computers, the energy consumption is orders of magnitude smaller. Although neurons are sometimes thought of as processing units, and synapses as memory units, they contribute to both processing and storage. In other words, data is already located where the computation requires it.</p>
<p>This speeds up the brain’s computing in general because there is no separation between memory and processor, a separation that causes a slowdown in classical (von Neumann) machines. It also removes the need for a separate step of fetching data from a main memory component, which in conventional computing systems consumes a considerable amount of energy. </p>
<p>The principles we have just described are the main inspiration for DeepSouth, but it is not the only neuromorphic system currently active. It is worth mentioning the <a href="https://www.humanbrainproject.eu">Human Brain Project (HBP)</a>, funded under an <a href="https://ec.europa.eu/futurium/en/content/fet-flagships.html">EU initiative</a>. The HBP was operational from 2013 to 2023, and led to BrainScaleS, a machine located in Heidelberg, Germany, that emulates the way neurons and synapses work. </p>
<p><a href="https://www.humanbrainproject.eu/en/science-development/focus-areas/neuromorphic-computing/hardware/">BrainScaleS</a> can simulate the way that neurons “spike”, the way that an electrical impulse travels along a neuron in our brains. This would make BrainScaleS an ideal candidate to investigate the mechanics of cognitive processes and, in future, mechanisms underlying serious neurological and neurodegenerative diseases.</p>
<p>Because they are engineered to mimic actual brains, neuromorphic computers could be the beginning of a turning point. Offering sustainable and affordable computing power and allowing researchers to evaluate models of neurological systems, they are an ideal platform for a range of applications. They have the potential to both advance our understanding of the brain and offer new approaches to artificial intelligence.</p><img src="https://counter.theconversation.com/content/220044/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Neuromorphic computers aim to one day replicate the amazing efficiency of the brain.Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2191582023-12-17T13:41:47Z2023-12-17T13:41:47ZWhy the American technological war against China could backfire<iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/why-the-american-technological-war-against-china-could-backfire" width="100%" height="400"></iframe>
<p>The <a href="https://www.bloomberg.com/news/articles/2023-10-15/us-will-tighten-curbs-on-china-s-access-to-advanced-chip-tech">technological war</a> waged by the United States <a href="https://www.nytimes.com/2023/08/09/us/politics/biden-ban-china-investment.html">against China</a> <a href="https://www.bloomberg.com/news/newsletters/2023-09-29/are-us-technology-sanctions-against-china-backfiring">has the potential to backfire</a>, <a href="https://www.nytimes.com/2023/07/12/magazine/semiconductor-chips-us-china.html">supercharging China’s creation of an independent computer chip industry</a> <a href="https://www.youtube.com/watch?v=SUfjtKtkS2U&t=16s">that would directly compete with American manufacturers</a>. </p>
<p>U.S. President Joe Biden’s administration has employed <a href="https://www.bnnbloomberg.ca/us-lawmakers-target-china-with-export-controls-sanctions-bills-1.2011207">increasingly restrictive sanctions</a> to prevent American and allied chip manufacturers from selling their most advanced products to China.</p>
<p>These restrictions are aimed at <a href="https://asiatimes.com/2022/12/us-chip-ban-wont-short-circuit-chinas-military-power/">preventing China’s military</a> from <a href="https://www.eastasiaforum.org/2023/08/30/2099944/">developing more sophisticated weapons</a>. However, the People’s Liberation Army uses very few high-tech chips. The tech war seems designed to cripple China’s overall technological development and, by extension, its economic growth and prosperity. </p>
<h2>Cautionary tale</h2>
<p>Ongoing <a href="https://www.politico.eu/article/west-world-tour-huawei-china-telecom/">American efforts</a> to cripple the Chinese telecom company Huawei may serve as a cautionary tale for the U.S.</p>
<p>American technological sanctions damaged the company and its role as a leading global producer of cellphones, but Huawei has <a href="https://thediplomat.com/2023/01/chinas-huawei-looks-to-ports-factories-to-rebuild-sales/#:%7E:text=Huawei%20is%20reinventing%20itself%20as,sanctions%20crushed%20its%20smartphone%20brand.">reinvented itself</a> as a cloud computing network company. </p>
<p>It has also re-entered the cellphone market, introducing its <a href="https://asia.nikkei.com/Business/Technology/Huawei-Mate-60-Pro-teardown-reveals-47-Chinese-parts-in-phone">Mate 60 phone that boasts Chinese-designed and manufactured seven-nanometre</a> computer chips. American tech restrictions were meant to keep China stuck at manufacturing no more than 14-nanometre chips, keeping it at least eight to 10 years behind U.S. technology.</p>
<p>The accomplishment means that China is gaining ground on the U.S. </p>
<p>Recently, Huawei introduced <a href="https://www.gizchina.com/2023/10/31/former-tsmc-ibm-exec-reveals-huaweis-capability-to-produce-cutting-edge-5nm-chips/">a computer with a five-nanometre chip</a>, further closing the gap with the West. </p>
<p>Western observers <a href="https://www.nytimes.com/2021/07/04/technology/tech-cold-war-chips.html">have argued that the production of high-end microchips</a> requires international co-operation. </p>
<p><a href="https://www.youtube.com/watch?v=08myo1UdTZ8&t=12s">The Netherlands’ ASML</a> is the only company with the advanced lithography equipment needed to make three-nanometre chips. ASML built its machine using technologies from about seven other countries and took 20 years to get to market. Therefore, China is unlikely to succeed if it’s relying only on itself to create independent capacity.</p>
<p>However, the basic understanding of how lithography works is well-known. China has pushed its existing ASML equipment beyond its original capabilities and is pioneering an <a href="https://medium.com/@thechinaacademy/china-may-be-constructing-euv-lithography-machines-on-a-massive-scale-da796ea1af73">innovative approach to lithography</a> that could see China mass-producing high-end semiconductors in the future.</p>
<h2>Chinese education prowess</h2>
<p>Most importantly, scientific knowledge cannot be contained and China has made extraordinary gains in its educational system. </p>
<p>Chinese high schoolers in four affluent provinces <a href="https://archive.ph/3KGuE">score the highest in the world in reading, science and mathematics</a>. According to <em>Times Higher Education</em>, Chinese universities are <a href="https://www.timeshighereducation.com/china-subject-ratings-2021-china-outperforms-rest-world">“outperforming institutions in the rest of the world in the vast majority of disciplines</a>.”</p>
<p><a href="https://www.usnews.com/education/best-global-universities/engineering">The <em>U.S. News & World Report</em> has ranked six of the top 10 (and 11 of the top 20) engineering schools in the world</a>, and they’re in China, with Tsinghua University in Beijing in first place. Only two of the top 10 are American. <a href="https://asiatimes.com/2023/09/china-dominates-in-high-quality-natural-science-research/">China is also projected to produce 77,000 science, technology, engineering and math (STEM) graduates by 2025</a>, more than double that of the U.S. </p>
<p>China has been saddled with the stereotype that <a href="https://hbr.org/2014/03/why-china-cant-innovate">it cannot innovate</a>. But in 2022, <a href="https://doi.org/10.1038/d41586-023-01705-7">China overtook the U.S. for the first time</a> as the country or territory publishing the most research articles in prestigious natural science journals. </p>
<p>China closed the gap remarkably quickly, <a href="https://asiatimes.com/2023/09/china-dominates-in-high-quality-natural-science-research/">increasing its share of scientific articles</a> by 21 per cent since 2021 and 152 per cent since 2016. </p>
<p>According to Japan’s National Institute of Science and Technology Policy, <a href="https://www.theguardian.com/world/2022/aug/11/china-overtakes-the-us-in-scientific-research-output">China published the highest number of scientific research papers annually between 2018 and 2020</a>, and had 27.2 per cent of the world’s top one per cent of the most frequently cited papers, compared to 24.9 per cent for the U.S. </p>
<p>A survey done by the <a href="https://www.aspi.org.au/report/critical-technology-tracker">Australian Strategic Policy Institute</a> determined that China is leading in 37 of 44 cutting-edge technologies, including nanoscale materials and synthetic biology. China is also <a href="https://asiatimes.com/2023/09/china-using-industrial-robots-at-12x-us-rate/">using industrial robots at 12 times the rate</a> of the U.S. </p>
<h2>Cannot be cut off</h2>
<p>This is not a country that <a href="https://www.wsj.com/articles/u-s-sanctions-drive-chinese-firms-to-advance-ai-without-latest-chips-f6aed67f">can be contained by cutting it off from technology</a>. When it comes to the use and production of knowledge-based industries, China has more advantages than any other country in the world. </p>
<p>American actions will create a new generation of Chinese high-tech firms that will compete directly with the U.S. and western businesses from whom they used to buy their products. These firms will produce more affordable products than their western counterparts, and <a href="https://asiatimes.com/2022/11/blocked-in-the-west-huawei-eyes-emerging-markets/">could dominate technological infrastructure</a> in the Global South.</p>
<p>Chinese electric vehicles are the <a href="https://asia.nikkei.com/Spotlight/Electric-cars-in-China/China-s-GAC-breaks-1-000-km-range-barrier-with-new-EV">most advanced</a> <a href="https://www.nytimes.com/2023/12/08/business/china-electric-vehicles.html">in the world</a>, and <a href="https://www.thecooldown.com/green-tech/byd-seagull-ev-cheap-electric-car/">spreading to the rest of the globe</a>. Even as direct U.S.-China trade has declined, <a href="https://www.worldbank.org/en/news/feature/2023/08/29/protectionism-is-failing-to-achieve-its-goals-and-threatens-the-future-of-critical-industries">China’s overall importance to world trade has increased</a>. </p>
<p>Over the past year, <a href="https://www.nytimes.com/2023/03/14/opinion/ezra-klein-podcast-dan-wang.html">numerous pundits</a> <a href="https://www.nytimes.com/2023/08/26/opinion/china-economy-xi-jinping.html">have declared</a> <a href="https://www.nytimes.com/2023/08/24/business/china-economy-safety-net.html">that China’s economic collapse</a> is imminent. There’s no question <a href="https://asiatimes.com/2023/11/chinas-economic-miracle-turns-to-fiscal-crisis/">China is experiencing economic headwinds</a> as it deals with deflationary pressures linked to real estate, high local government debt and reduced consumer confidence. </p>
<h2>No collapse imminent</h2>
<p>But China’s critics have been predicting its collapse for decades. China keeps confounding them, and <a href="https://foreignpolicy.com/2023/10/17/china-economy-optimists-property/?tpcc=recirc_trending062921">it probably will once again</a>. <a href="https://www.reuters.com/world/china/imf-upgrades-chinas-2023-2024-gdp-growth-forecasts-2023-11-07/">The International Monetary Fund (IMF) has adjusted China’s predicted GDP growth rate upwards for 2023 to 5.4 per cent, and expects 4.6 per cent growth in 2024</a>. </p>
<p>The IMF expects China’s growth to continue slowing in the future, but this forecast doesn’t account for the technological potential that the country is unlocking. </p>
<p>China may be using the present debt crisis <a href="https://asiatimes.com/2023/08/property-shakeout-beijings-tool-to-fight-fiefdoms/">to redirect domestic investment</a> away from a volatile property market and towards a productive and sustainable high tech economy. </p>
<p>If so, American efforts to stifle China may have created the conditions needed to ensure its success.</p><img src="https://counter.theconversation.com/content/219158/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Shaun Narine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Chinese technology advancements cannot be contained, and the country is increasingly an education and research powerhouse.Shaun Narine, Professor of International Relations and Political Science, St. Thomas University (Canada)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2165812023-11-22T17:05:13Z2023-11-22T17:05:13ZThe vast majority of us have no idea what the padlock icon on our internet browser is – and it’s putting us at risk<figure><img src="https://images.theconversation.com/files/559630/original/file-20231115-15-zfe1h.jpg?ixlib=rb-1.1.0&rect=50%2C0%2C5568%2C3692&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The padlock icon which appears in most internet browser address bars. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/web-browser-closeup-on-lcd-screen-1353121223">Robert Avgustin/Shutterstock</a></span></figcaption></figure><p>Do you know what the padlock symbol in your internet browser’s address bar means? If not, you’re not alone. <a href="https://www.tandfonline.com/doi/full/10.1080/10447318.2023.2266789">New research</a> by my colleagues and I shows that only 5% of UK adults understand the padlock’s significance. This is a threat to our online safety. </p>
<p>The padlock symbol on a web browser simply means that the data being sent between the web server and the user’s computer is encrypted and cannot be read by others. But when we asked people what they thought it meant, we received an array of incorrect answers.</p>
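<p>For readers curious about what the padlock actually reflects, the short Python sketch below (using only the standard ssl and socket modules; the hostname is just an example) opens an encrypted connection and prints the negotiated details. Note what it does and does not tell you: the channel is private, but nothing here says the site itself is honest.</p>
<pre><code>import socket
import ssl

def inspect_tls(hostname, port=443):
    """Open a TLS connection and report the negotiated encryption details."""
    context = ssl.create_default_context()   # verifies the certificate chain
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("TLS version:   ", tls.version())
            print("Cipher suite:  ", tls.cipher()[0])
            print("Certificate CN:", dict(x[0] for x in cert["subject"]).get("commonName"))

# Encryption tells you the channel is private, not that the site is trustworthy:
# a phishing site can present a perfectly valid certificate too.
inspect_tls("example.com")
</code></pre>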
<p>In our study, we asked a cross section of 528 web users, aged between 18 and 86, a number of questions about the internet. Some 53% of them held a bachelor’s degree or above and 22% had a college certificate, while the remainder had no further education. </p>
<p>One of our questions was: “On the Google Chrome browser bar, do you know what the padlock icon represents/means?” </p>
<p>Of the 463 who responded, 63% stated they knew, or thought they knew, what the padlock symbol on their web browser meant, but only 7% gave the correct meaning. Respondents gave us a range of incorrect interpretations, believing among other things that the padlock signified a secure web page or that the website is safe and doesn’t contain any viruses or suspicious links. Others believed the symbol means a website is “trustworthy”, is not harmful, or is a “genuine” website. </p>
<figure class="align-left ">
<img alt="A symbol of a circle next to a straight line over a straight line and a circle." src="https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559903/original/file-20231116-19-zm7pen.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Google’s new ‘tune icon’ which replaces the padlock icon in Chrome’s address bar.</span>
<span class="attribution"><a class="source" href="https://blog.chromium.org/2023/05/an-update-on-lock-icon.html">Google Chromium</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Not understanding symbols like the padlock icon can pose problems for internet users. These include increased security risks and simply hindering effective use of the technology.</p>
<p>Our findings corroborate research by <a href="https://support.google.com/chrome/thread/222182314/the-lock-icon-replaced-with-a-tune-icon-in-the-google-chrome-address-bar?hl=en">Google</a> itself, which in September replaced the padlock icon with a <a href="https://www.thesslstore.com/blog/google-to-replace-the-padlock-icon-in-chrome-version-117/#:%7E:text=But%20that’s%20about%20to%20change,to%20have%20HTTPS%20by%20default.">neutral symbol</a> described as a “tune icon”. In doing so, Google hopes to eradicate the misunderstandings that the padlock icon has fostered. </p>
<p>However, Google’s update now raises the question as to whether other web browser companies will join forces to ensure their designs are uniform and intuitive across all platforms.</p>
<h2>Web browser evolution</h2>
<p>The browser is our point of entry to the world wide web, and that entails a lot of responsibility on the part of web companies. It’s how we visit web pages, and it has become an integral part of our daily lives. </p>
<p>It’s intriguing to look back and trace the evolution of the web’s design from the early 1990s to where we are today. Creating software that people wanted to use and found effective was at the heart of this <a href="https://www.interaction-design.org/literature/topics/human-computer-interaction">evolution</a>. The creation of functioning, satisfying, and most importantly, consistently designed user interfaces was an important goal in the 1990s. In fact, there was a drive in those early days to create web interface designs that were so consistent and intuitive that users would not need to think too much about how they work. </p>
<p>Nowadays, it’s a different story because the challenge is centred on helping people to think before they interact online. In light of this, it seems bizarre that the web browser in 2023 still creates uncertainty through its design, and worse still, that it is presented inconsistently across different providers. </p>
<p>It could be argued that this stems from the <a href="https://www.investopedia.com/ask/answers/09/browser-wars-netscape-internet-explorer.asp">browser wars</a> of the mid-1990s. That’s when the likes of Microsoft and former software company, Netscape, tried to outdo each other with faster, better and more unique products. The race to be distinct meant there was inconsistency between products. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LOWOLJci8d8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The rise and fall of Netscape and the browser wars of the 1990s.</span></figcaption>
</figure>
<h2>Internet safety</h2>
<p>However, introducing distinct browser designs can lead to misunderstanding and a false sense of security, especially as it is <a href="https://www.interaction-design.org/literature/article/principle-of-consistency-and-standards-in-user-interface-design">now widely known</a> that such inconsistency breeds confusion and, from that, frustration and lack of use. </p>
<p>As an expert in human-computer interaction, it is alarming to me that some browser companies continue to disregard <a href="https://www.nngroup.com/articles/ten-usability-heuristics/">established guidelines</a> for usability. In a world where web browsers open the doors to potentially greater societal risks than the offline world, it is crucial to establish a consistent approach for addressing these dangers. </p>
<p>As a minimum, we need web browser companies to join forces in a concerted effort to shield users, or at the very least, heighten their awareness regarding potential online risks. This should include formulating one unified design across the board that affords an enriched and safe user experience.</p><img src="https://counter.theconversation.com/content/216581/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Fiona Carroll does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The padlock symbol simply means that the data being sent between the web server and the user’s computer is encrypted and cannot be read by others. But many people don’t know that.Fiona Carroll, Reader in Human Computer Interaction, Cardiff Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2133062023-11-17T13:29:43Z2023-11-17T13:29:43ZWhat is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers<figure><img src="https://images.theconversation.com/files/559476/original/file-20231114-21-dv3rca.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5731%2C3829&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IBM's quantum computer got President Joe Biden's attention.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/president-joe-biden-looks-at-quantum-computer-as-he-tours-news-photo/1243772280">Mandel Ngan/AFP via Getty Images</a></span></figcaption></figure><p>Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers. </p>
<p>Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.</p>
<p>There are some types of problems that are <a href="https://theconversation.com/limits-to-computing-a-computer-scientist-explains-why-even-in-the-age-of-ai-some-problems-are-just-too-difficult-191930">impractical for classical computers to solve</a>, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.</p>
<p>I am <a href="https://scholar.google.com/citations?user=2J2t64gAAAAJ&hl=en">a physicist</a> who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.</p>
<h2>The source of quantum computing’s power</h2>
<p>Central to quantum computing is the quantum bit, or <a href="https://quantumatlas.umd.edu/entry/qubit/">qubit</a>. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a <a href="https://quantumatlas.umd.edu/entry/superposition/">quantum superposition</a>. With every additional qubit, the number of states that can be represented by the qubits doubles. </p>
<p>This property is often mistaken for the source of the power of quantum computing. Instead, it comes down to an intricate interplay of superposition, <a href="https://encyclopedia2.thefreedictionary.com/Quantum+Interference">interference</a> and <a href="https://theconversation.com/nobel-winning-quantum-weirdness-undergirds-an-emerging-high-tech-industry-promising-better-ways-of-encrypting-communications-and-imaging-your-body-191929">entanglement</a>.</p>
<p>Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves – like sound waves or ocean waves – combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.</p>
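<p>For a concrete, purely illustrative picture of superposition and interference, the sketch below represents a single qubit as a two-element vector of amplitudes and applies a Hadamard gate twice, using nothing but numpy. The first application creates an equal superposition; the second makes the two computational paths interfere so that the amplitude for 1 cancels and the qubit returns to 0 with certainty. Real hardware does this physically; the code is just classical arithmetic mimicking the mathematics.</p>
<pre><code>import numpy as np

# A qubit state is a length-2 vector of complex amplitudes; measurement
# probabilities are the squared magnitudes of those amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)        # the state 0

# The Hadamard gate creates an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0
print("After one H: ", np.abs(superposed) ** 2)   # [0.5, 0.5]: a 50/50 outcome

# Applying H again makes the two paths interfere: contributions to the
# amplitude of 1 cancel (destructive) while those of 0 add up (constructive).
back = H @ superposed
print("After two Hs:", np.abs(back) ** 2)         # [1.0, 0.0]: always measures 0
</code></pre>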
<p>Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.</p>
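<p>Entanglement can be shown in the same toy state-vector picture. The sketch below (again a classical simulation for illustration, not how a quantum computer is programmed in practice) applies a Hadamard and then a controlled-NOT to two qubits, producing a Bell state in which only the outcomes 00 and 11 ever occur, so measuring one qubit immediately fixes the other.</p>
<pre><code>import numpy as np

# Two qubits are described by 4 amplitudes, ordered as 00, 01, 10, 11.
ket00 = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],      # flips the second qubit
                 [0, 1, 0, 0],      # whenever the first qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the first qubit, then a controlled-NOT, produces a Bell state.
bell = CNOT @ np.kron(H, I) @ ket00
print(np.abs(bell) ** 2)   # [0.5, 0, 0, 0.5]: only 00 and 11 are ever observed
</code></pre>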
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jHoEjvuPoB8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The ones and zeros – and everything in between – of quantum computing.</span></figcaption>
</figure>
<h2>Applications of quantum computing</h2>
<p>Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the <a href="https://theconversation.com/is-quantum-computing-a-cybersecurity-threat-107411">potential to decipher current encryption algorithms</a>, such as the widely used <a href="https://www.britannica.com/topic/RSA-encryption">RSA scheme</a>. </p>
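<p>To see why this matters, the toy sketch below builds an RSA-style key pair from two tiny primes (purely illustrative; real keys use primes hundreds of digits long). Anyone who can factor the public modulus back into its two primes can recompute the private key, and that factoring step is precisely what a large, error-corrected quantum computer running Shor’s algorithm is expected to speed up dramatically.</p>
<pre><code># Toy RSA with tiny primes (illustrative only; real RSA uses ~2048-bit moduli).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # kept secret; easy to compute only if you know p and q
e = 17                         # public exponent, chosen coprime with phi
d = pow(e, -1, phi)            # private exponent: the modular inverse of e

message = 42
ciphertext = pow(message, e, n)   # encrypt with the public key (n, e)
print(pow(ciphertext, d, n))      # decrypt with the private key d -> prints 42

# An attacker who factors n = 3233 back into 61 * 53 can repeat the steps above
# and recover d. Trivial for tiny n; far beyond classical computers for
# 2048-bit n, but a target for Shor's algorithm on a quantum computer.
</code></pre>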
<p>One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of <a href="https://www.nist.gov/programs-projects/post-quantum-cryptography">post-quantum cryptography</a>. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.</p>
<p>In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman <a href="https://doi.org/10.1007/BF02650179">envisioned this possibility</a> more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties. </p>
<p>Another use of quantum information technology is <a href="https://doi.org/10.1103/RevModPhys.89.035002">quantum sensing</a>: detecting and measuring physical properties like electromagnetic energy, gravity, pressure and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as <a href="https://www.azoquantum.com/Article.aspx?ArticleID=444">environmental monitoring</a>, <a href="https://doi.org/10.1038/s41586-021-04315-3">geological exploration</a>, <a href="https://doi.org/10.1038/s42254-023-00558-3">medical imaging</a> and <a href="https://www.defenseone.com/ideas/2022/06/quantum-sensorsunlike-quantum-computersare-already-here/368634/">surveillance</a>.</p>
<p>Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks – including those using quantum computers.</p>
<p>Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage – in particular <a href="https://journals.aps.org/prxquantum/pdf/10.1103/PRXQuantum.3.030101">in machine learning</a> – remains a critical area of ongoing research.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a metal apparatus with green laser light in the background" src="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559489/original/file-20231115-29-uo273g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves.</span>
<span class="attribution"><a class="source" href="https://news.mit.edu/2022/quantum-sensor-frequency-0621">Guoqing Wang</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>Staying coherent and overcoming errors</h2>
<p>The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits. </p>
<p>Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, <a href="http://www.cambridge.org/9780521897877">an area my own research is focused on</a>.</p>
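<p>The intuition behind error correction can be glimpsed in its classical ancestor, the repetition code, sketched below. This is a deliberately simplified analogy: real quantum codes must protect superpositions without ever reading the encoded value directly, which is what makes the quantum version so much harder.</p>
<pre><code>import random

# Classical repetition code: store each bit three times and decode by majority
# vote. Quantum error-correcting codes use the same redundancy idea, but have
# to detect errors without directly measuring (and destroying) the qubits.
def encode(bit):
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob=0.1):
    return [b ^ 1 if flip_prob > random.random() else b for b in bits]

def decode(bits):
    return 1 if sum(bits) >= 2 else 0   # majority vote

random.seed(0)
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(10_000))
print(f"logical error rate: {failures / 10_000:.4f}")   # well below the raw 10% rate
</code></pre>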
<p>In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.</p>
<h2>Quantum advantage coming into view</h2>
<p>Quantum computing may one day be as disruptive as the arrival of <a href="https://memberservices.theconversation.com/newsletters/?nl=ai">generative AI</a>. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. <a href="https://www.nature.com/articles/s41586-019-1666-5">Researchers at Google</a> and later a <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.127.180501">team of researchers in China</a> demonstrated quantum advantage <a href="https://doi.org/10.1038/s41534-023-00703-x">for generating a list of random numbers</a> with certain properties. My research team demonstrated a quantum speed-up <a href="https://doi.org/10.1103/PhysRevLett.130.210602">for a random number guessing game</a>.</p>
<p>On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.</p>
<p>While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.</p><img src="https://counter.theconversation.com/content/213306/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniel Lidar receives funding from the NSF, DARPA, ARO, and DOE.</span></em></p>Several companies have made quantum computers, but these early models have yet to demonstrate quantum advantage: the ability to outstrip ordinary supercomputers.Daniel Lidar, Professor of Electrical Engineering, Chemistry, and Physics & Astronomy, University of Southern CaliforniaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2141752023-10-04T17:08:29Z2023-10-04T17:08:29ZThe case for continuing to write by hand<figure><img src="https://images.theconversation.com/files/549769/original/file-20230910-152774-othbu7.jpg?ixlib=rb-1.1.0&rect=101%2C93%2C5106%2C3373&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/es/image-photo/girls-hands-pen-writing-on-notebook-435712027">Ivan Kruk/Shutterstock</a></span></figcaption></figure><p>At the beginning of 1882, the philosopher Friedrich Nietzsche received a machine called the “Malling-Hansen Writing Ball”, a nifty little gadget covered with keys. The thinker’s eyesight had been getting worse, to the point where he could no longer write by hand. In March of the same year he was able to continue writing thanks to this new instrument.</p>
<p>His friend, the composer Heinrich Köselitz, noted a certain stylistic change in his texts from that time onwards, as his prose became more terse and succinct. It appears that <a href="https://psycnet.apa.org/record/2011-11288-000">the content of some of his philosophy was also affected</a>. </p>
<figure class="align-center ">
<img alt="Malling-Hansen writing ball, as used by F. Nietzsche following his loss of eyesight. The keyboard is oval in shape, and the roll that holds the paper is inserted underneath." src="https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&rect=5%2C20%2C3391%2C2261&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=404&fit=crop&dpr=1 600w, https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=404&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=404&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=507&fit=crop&dpr=1 754w, https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=507&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/545950/original/file-20230901-19-tifmah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=507&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Malling-Hansen writing ball, as used by F. Nietzsche following his loss of eyesight.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Writing_ball,_Rasmus_Malling-Hansen,_1870s,_TM8438,_view_2_-_Tekniska_museet_-_Stockholm,_Sweden_-_DSC01586.JPG">Wikicommons Media /Tekniska museet, Estocolmo, Suecia.</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Embodied cognition</h2>
<p>This change – a reminder that <a href="https://doi.org/10.4324/9781315512211">“the medium is the message”</a> – might be explained by new theories of <a href="https://doi.org/10.3758/BF03196322">embodied cognition</a>. Modern cognitive science indicates that motor and sensory aspects of our behaviour have consequences that reach further than we may, at first, suspect. </p>
<p>Our minds are not filled with disembodied ideas. We need to take our bodies and senses into account in order to explain phenomena that are otherwise inexplicable.</p>
<p>In <a href="https://www.jstor.org/stable/40064064">a study</a> carried out almost two decades ago, subjects were presented with words carrying a positive connotation (such as “sweet”) or negative (such as “rubbish”). Subjects had to indicate whether a word was “good” or “bad” by moving a joystick. Half of the subjects were told to indicate that a word was “good” by pulling the joystick towards their bodies, and the other half were told to indicate “good” by pushing it away.</p>
<p>To indicate that a word was “bad”, the subjects made the opposite movement to “good”. A consistent correlation was observed between meaning and movement: the quickest responses were produced by the subjects who were told to indicate “good” by pulling the joystick towards themselves, and to indicate “bad” by pushing it away. </p>
<p>This direct involvement of the body and senses in mental processes can explain how writing by hand (as opposed to other types of writing) helps us to learn letters and words. This is backed up by the results of various studies, which we will now present.</p>
<h2>Memory ability</h2>
<p>As an example, a <a href="https://doi.org/10.1007/s11145-021-10137-6">2021 study</a> compared short- and mid-term recall of words learned either by typing or by writing by hand. Recall was better when words had been learned with a pencil and paper.</p>
<p>In <a href="https://doi.org/10.1016/j.humov.2006.07.007">another study</a>, a small group of adults learned new symbols (from a language of Asian origin) that they then had to reproduce with a pencil or a keyboard. When they had finished learning the symbols, there were no differences in recall between the two methods, but the keyboard users forgot a significant amount of what they had learned as time passed by. </p>
<p><a href="https://doi.org/10.17239/jowr-2015.07.02.1">Some experts</a> suggest that learning by pen and paper is better because it is more “embodied”. This means that it involves a set of more complex sensory-motor processes particular to each letter. This complexity leaves a more distinctive footprint in our memories and, as a result, makes things easier to memorise and recall, much like how a two metre tall supermodel might be more memorable than someone with average features. </p>
<h2>Mental resources</h2>
<p>Another benefit of writing by hand is that it puts less strain on our brains. Typing puts more demand on our mental resources, causing difficulties when it comes to word memorisation tasks. The increased mental load is due to the greater speed at which words are typed. This, in turn, means there are more individual elements to process over a certain period of time.</p>
<p>The idea of complexity appears in other studies to explain similar differences when writing in a second language. <a href="https://doi.org/10.3390/languages7030195">Other authors</a> have also suggested that more mental resources are used because writing is more complex when performed on the keyboard. </p>
<h2>Planning and composition</h2>
<p>The way we write can also affect higher levels of word processing in our brains. It is thought that the type of writing sets the pace of cognitive processing – if the writing process is slow and cumbersome it can interfere with the higher processes on which it depends, causing ideas and relevant information to be forgotten.</p>
<p>In several experiments, an improvement has been noted in overall written composition (quality, length and fluidity of text) after subjects completed handwriting exercises. In the same vein, <a href="https://doi.org/10.1177/105065190101500402">a study</a> has shown that students plan their texts more carefully when using pen and paper than they do when using a keyboard. </p>
<h2>Longer and better quality texts</h2>
<p>Nevertheless, it must be recognised that keyboards have also brought us many advantages. For example, some <a href="https://doi.org/10.1007/s11145-010-9292-5">meta analyses</a> comparing hand written texts with those written on a computer found the latter to often be longer and better composed.</p>
<p>In addition, <a href="https://doi.org/10.1007/s11145-010-9292-5">people with literacy problems are those who feel these benefits most acutely</a>. However, some of <a href="https://ejournals.bc.edu/index.php/jtla/article/view/1661">these studies</a> lack experimental rigour, which raises doubts about their clarity and general usefulness.</p>
<h2>A disembodied mind</h2>
<p>The Matrix film tetralogy presents the idea of a mind without a body. But a disembodied mind that exists only in the virtual world is much like Plato’s cave. The torches that cast shadows on the wall are replaced by computer simulations. In real life, however, the mind performs its operations alongside the functioning of our entire body.</p>
<p>We are closer to a purely symbolic, disembodied mind when we type on the keyboard (a device that works with minimal sensory involvement) than when we put pen to paper, as our cognitive system reacts differently depending on the motor and sensory inputs that we experience.</p>
<h2>Implications for education</h2>
<p>The results of these studies all indicate that eliminating handwriting from school – as has reportedly been done in <a href="https://www.bbc.com/future/article/20171108-the-uncertain-future-of-handwriting">Finland</a> – is not a good idea.</p>
<p>Obviously, this does not mean that we should stop using keyboards, not even in early childhood education. However, we must be on the lookout for future studies on this topic, and act on their implications for the education sector as promptly as possible.</p><img src="https://counter.theconversation.com/content/214175/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Javier Marín Serrano receives funding from the Spanish Ministry of Science and Innovation.</span></em></p><p class="fine-print"><em><span>Miguel Ángel Pérez-Sánchez receives funding from the Spanish Ministry of Science and Innovation.</span></em></p><p class="fine-print"><em><span>Olena Vasylets no recibe salario, ni ejerce labores de consultoría, ni posee acciones, ni recibe financiación de ninguna compañía u organización que pueda obtener beneficio de este artículo, y ha declarado carecer de vínculos relevantes más allá del cargo académico citado.</span></em></p>How does typing affect the way we process what we write, and what are the consequences of giving up handwriting at school age?Javier Marín Serrano, Profesor Titular de Universidad. Psicología del Lenguaje. Psicología del Pensamiento, Universidad de MurciaMiguel Ángel Pérez-Sánchez, Profesor Titular del Dpto. de Psicología Básica y Metodología, Universidad de MurciaOlena Vasylets, Profesora asociada, Facultad de Filología y Comunicación, Universitat de BarcelonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2068632023-06-02T02:35:46Z2023-06-02T02:35:46ZHow should Australia capitalise on AI while reducing its risks? It’s time to have your say<figure><img src="https://images.theconversation.com/files/529701/original/file-20230602-19-933y42.jpg?ixlib=rb-1.1.0&rect=253%2C84%2C5913%2C3572&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-illustration-artificial-neuron-concept-intelligence-592921421">Shutterstock</a></span></figcaption></figure><p>The world missed the boat with social media. It fuelled misinformation, fake news, and polarisation. We saw the harms too late, once they had already started to have a substantive impact on society. </p>
<p>With artificial intelligence – especially generative AI – we’re earlier to the party. Not a day goes by without a <a href="https://www.theverge.com/2023/3/27/23657927/ai-pope-image-fake-midjourney-computer-generated-aesthetic">new deepfake</a>, open letter, product release or interview raising the public’s concern. </p>
<p>Responding to this, the Australian government has just <a href="https://www.industry.gov.au/news/responsible-ai-australia-have-your-say">released two important documents</a>. One is a report commissioned by the National Science and Technology Council (NSTC) on the opportunities and risks posed by generative AI, and the other is a consultation paper asking for input on possible regulatory and policy responses to those risks.</p>
<p>I was one of the external reviewers of the NSTC report. I’ve read both documents carefully so you don’t have to. Here’s what you need to know.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614">No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye</a>
</strong>
</em>
</p>
<hr>
<h2>Trillions of life-changing opportunities</h2>
<p>With AI, we see a multi-trillion dollar industry coming into existence before our eyes – and Australia could be well-placed to profit.</p>
<p>In the last few months, two local <a href="https://www.investopedia.com/terms/u/unicorn.asp">unicorns</a> (billion dollar companies) pivoted to AI. Online graphic design company Canva introduced its “magic” AI tools to generate and edit content, and software development company Atlassian introduced “Atlassian intelligence” – a new virtual teammate to help with tasks such as summarising meetings and answering questions. </p>
<p>These are just two examples. We see many other opportunities across industry, government, education and health. </p>
<p>AI tools to predict early signs of Parkinson’s disease? <a href="https://theconversation.com/heres-how-a-new-ai-tool-may-predict-early-signs-of-parkinsons-disease-205221">Tick</a>. AI tools to predict when solar storms will hit? <a href="https://www.popularmechanics.com/space/a43895983/nasa-ai-model-warns-of-solar-storms/">Tick</a>. Checkout-free, grab-and-go shopping, courtesy of AI? <a href="https://www.afl.com.au/news/896835/afl-and-amazon-to-introduce-just-walk-out-technology-at-marvel-stadium">Tick</a>. </p>
<p>The list of ways AI can improve our lives seems endless.</p>
<h2>What about the risks?</h2>
<p>The NSTC report outlines the most obvious risks: job displacement, misinformation and polarisation, wealth concentration and regulatory misalignment. </p>
<p>For example, are entry level lawyers going to be replaced by robots? Are we going to drown in a sea of deepfakes and computer generated tweets? Will big tech companies capture even more wealth? And how can little old Australia have a say on global changes?</p>
<p>The Australian government’s consultation paper looks at how different nations are responding to these challenges. This includes the US, which is adopting a light touch approach with voluntary codes and standards; the UK, which looks to empower existing sector-specific regulators; and Europe’s forthcoming AI Act, which is one of the first AI-specific regulations. </p>
<p>Europe’s approach is worth watching if its previous data protection law – the General Data Protection Regulation (GDPR) – is anything to go by. The GDPR has proved contagious: 17 countries outside Europe now have similar privacy laws.</p>
<p>We can expect the <a href="https://theconversation.com/calls-to-regulate-ai-are-growing-louder-but-how-exactly-do-you-regulate-a-technology-like-this-203050">European Union’s AI Act</a> to set a similar precedent on how to regulate AI. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Photo of a person reading about personal data regulation on their smartphone" src="https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/529697/original/file-20230602-17-rvxpbm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The European Union’s GDPR regulations came into effect on May 25 2018, and have become a model for other nations around the world.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/general-data-protection-regulation-closeup-human-1100627276">Shutterstock</a></span>
</figcaption>
</figure>
<p>Indeed, the Australian government’s consultation paper specifically asks if we should adopt a similar risk and audit-based approach as the AI Act. The Act outlaws high-risk AI applications, such as AI-driven social scoring systems (<a href="https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4">like the system in use in China</a>) and real-time remote biometric identification systems used by law enforcement in public spaces. It allows other riskier applications only after suitable safety audits.</p>
<p>China stands <a href="https://www.theverge.com/2023/4/12/23680027/china-generative-ai-regulations-promote-socialism-chatgpt-alibaba-baidu">somewhat apart</a> as far as regulating AI goes. It proposes to implement very strict rules, which would require AI-generated content to reflect the “core value of socialism”, “respect social morality and public order”, and not “subvert state power”, “undermine national unity” or encourage “violence, extremism, terrorism or discrimination”. </p>
<p>In addition, AI tools will need to go through a “security review” before release, and verify users’ identities and track usage. </p>
<p>It seems unlikely Australia will have the appetite for such strict state control over AI. Nonetheless, China’s approach reinforces how powerful AI is going to be, and how important it is to get right. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-ai-and-other-technologies-are-already-disrupting-the-workplace-206327">How AI and other technologies are already disrupting the workplace</a>
</strong>
</em>
</p>
<hr>
<h2>Existing rules</h2>
<p>As the government’s consultation paper notes, AI is already subject to existing rules. These include general regulations (such as privacy and consumer protection laws that apply across industries) and sector-specific regulations (such as those that apply to financial services or therapeutic goods). </p>
<p>One of the major goals of the consultation is to decide whether to strengthen these rules or, as the EU has done, to introduce specific AI risk-based regulation – or perhaps some mixture of these two approaches. </p>
<p>Government itself is a (potential) major user of AI and therefore has a big role to play in setting regulation standards. For example, procurement rules used by government can become de facto rules across other industries.</p>
<h2>Missing the boat</h2>
<p>The biggest risk, in my view, is that Australia misses this opportunity. </p>
<p>A few weeks ago, when the UK government <a href="https://www.theguardian.com/technology/2023/mar/15/uk-to-invest-900m-in-supercomputer-in-bid-to-build-own-britgpt">announced its approach</a> to deal with the risks of AI, it also announced an additional £1 billion of investment in AI, alongside the several billion pounds already committed.</p>
<p>We’ve not seen any such ambition from the Australian government. </p>
<p><a href="https://ellenmacarthurfoundation.org/articles/why-public-sector-innovation-is-the-reason-your-iphone-is-so-smart">The technologies that</a> gave us the iPhone, the internet, GPS, and wifi came about because of government investment in fundamental research and training for scientists and engineers. They didn’t come into existence because of venture funding in Silicon Valley.</p>
<p>We’re still waiting to see the government invest millions (or even billions) of dollars in fundamental research, and in the scientists and engineers that will allow Australia to compete in the AI race. There is still everything to play for.</p>
<p>AI is going to touch everyone’s lives, so I strongly encourage you to <a href="https://consult.industry.gov.au/supporting-responsible-ai">have your say</a>. You only have eight weeks to do so.</p><img src="https://counter.theconversation.com/content/206863/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council via an ARC Laureate Fellowship in Trustworthy AI. He was an external reviewer of the NSTC Rapid Response Information Report on Generative AI.</span></em></p>The artificial intelligence boom means a multi-trillion dollar industry is coming into existence before our eyes. With great opportunity come great risks, as two important new Australian reports show.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2033022023-04-09T12:07:37Z2023-04-09T12:07:37ZGiving Canadians the ‘right to repair’ empowers consumers, supports competition and benefits the environment<figure><img src="https://images.theconversation.com/files/519551/original/file-20230405-18-gpwqb.jpg?ixlib=rb-1.1.0&rect=10%2C0%2C6698%2C3480&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The right to repair means that consumer goods can be fixed and maintained by anyone.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>On March 28, <a href="https://www.budget.canada.ca/2023/home-accueil-en.html#downloads">the Canadian government’s budget announcement</a> introduced a plan to implement a “<a href="https://arstechnica.com/gadgets/2023/03/right-to-repair-rules-for-electronics-appliances-targeted-for-2024-canada-says/">right to repair</a>” for electronic devices and home appliances in 2024, alongside a new five-year tax credit worth $4.5 billion for Canadian clean tech manufacturers. The federal government will begin consultations on the plan in the summer.</p>
<p>The <a href="https://www.ifixit.com/News/61140/what-is-right-to-repair">right to repair</a> allows consumers to repair goods themselves or have them repaired by original equipment manufacturers (OEMs) or at independent repair shops. Key elements of the right are that repair manuals, tools, replacement parts and services must be available at competitive prices.</p>
<p>Right-to-repair movements have sprung up in the <a href="https://www.repair.org/aboutus">United States</a>, <a href="https://repair.eu/about/">Europe</a>, <a href="https://www.right2repair.org.za/home/">South Africa</a>, <a href="https://www.cleanup.org.au/australianrepairnetwork">Australia</a> and <a href="https://policyoptions.irpp.org/magazines/may-2021/canada-needs-right-to-repair-legislation/">Canada</a>, encompassing a range of products. Most familiar might be efforts to allow consumers to choose independent shops to repair their phones and computers. </p>
<p>But the right to repair also involves battles over who should be able to fix Internet of Things devices (everyday physical objects with embedded software and an internet connection), as well as other products that function via embedded software systems, such as vehicles, agricultural equipment and medical equipment.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="in a library, a man in a white cap holds a circuit board while another man walks past" src="https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519552/original/file-20230405-22-enm704.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A volunteer repairs a circuit board in Malmo, Sweden, at a fortnightly repair café as part of an international grassroot network calling for the right to repair.</span>
<span class="attribution"><span class="source">(AP Photo/James Brooks)</span></span>
</figcaption>
</figure>
<h2>Discouraging self-repair</h2>
<p>For too long the right to repair has been a casualty of the digital economy. Many manufacturers have <a href="https://www.vice.com/en/article/xgdazj/john-deere-hit-with-class-action-lawsuit-for-alleged-tractor-repair-monopoly">long discouraged or outright prohibited</a> independent repair. They do this in part by threatening penalties for copyright infringement or by voiding warranties for products repaired by independent shops or using non-OEM parts. </p>
<p>The corporate power to deny repair is possible because <a href="https://doi.org/10.14763/2019.2.1405">companies that control the digital hearts of software-enabled products</a> can use copyright law to restrict their customers or third-party services from fixing these products. Today, this includes everything from laptops to refrigerators, vacuum cleaners, tractors and fitness wearables.</p>
<p>Identifying problems with software-enabled goods often necessitates the use of diagnostic software, while undertaking repairs often requires copying all or part of the product software. However, manufacturers’ licensing agreements typically prohibit any actions, including repair, that copy or alter the product’s software. </p>
<p>The manufacturers contend that such actions constitute copyright infringement. Companies typically cite these licensing terms to prohibit any repairs undertaken by individuals not licensed by the original manufacturer. Companies may not actually sue customers for copyright infringement, <a href="https://www.vice.com/en/article/qjdjnv/apples-independent-repair-program-is-invasive-to-shops-and-their-customers-contract-shows">but they may target independent repair shops</a>.</p>
<p>Such tactics may discourage self-repair or the use of independent service people.</p>
<h2>Consumer pushback</h2>
<p>Questions of who can repair products and under what circumstances are fundamental to the nature of ownership and control. In fact, control over intangible forms of knowledge such as intellectual property and software-enabled goods is <a href="https://rowman.com/ISBN/9781538160879/The-New-Knowledge-Information-Data-and-the-Remaking-of-Global-Power">central to exerting power in the knowledge economy</a>.</p>
<p>The right-to-repair movement can be understood as a consumer pushback against the commodification of knowledge and a battle over who should be allowed to control and use knowledge — to repair, tinker or innovate — and in whose interests.</p>
<p>Battles over the right to repair have particular relevance for Canada. Major manufacturers, often headquartered in the U.S. or Europe, set rules regarding repair that <a href="https://www.accc.gov.au/focus-areas/market-studies/agricultural-machinery-market-study">privilege their business models</a>. These rules favour their branded suppliers and authorized repair technicians to <a href="https://www.bloomberg.com/news/articles/2022-01-20/john-deere-is-facing-a-farmer-revolt-over-the-right-to-repair#xj4y7vzkg">maximize control over repair services</a>. </p>
<p>This not only shuts out Canadian third-party businesses that supply replacement parts and repair services, but also disadvantages Canadian consumers.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/C1VDtcUrsVA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC covers right to repair and the 2023 federal budget.</span></figcaption>
</figure>
<h2>Effective policy development</h2>
<p>As the Canadian government prepares for consultations on implementing the right to repair, I offer several suggestions:</p>
<p>First, policymakers should build upon right-to-repair efforts elsewhere, particularly <a href="https://www.pc.gov.au/inquiries/completed/repair#report">Australia</a>, the European Union and the U.S. </p>
<p>Australia appears to be moving toward a right to repair. Its consumer watchdog agency, the Australian Competition and Consumer Commission, <a href="https://www.accc.gov.au/focus-areas/market-studies/agricultural-machinery-market-study">studied the effects of restrictive repair practices</a> on the agricultural machinery and after-sales markets in that country in 2020.</p>
<p>The European Parliament adopted resolutions on the right to repair in 2020 and 2021, and is planning a <a href="https://www.europarl.europa.eu/legislative-train/theme-a-european-green-deal/file-right-to-repair">legislative proposal</a> on the matter by mid-2023, building upon several years of working to make manufacturing and product design more eco-friendly. </p>
<p>In the U.S., President Joe Biden strengthened the case for right to repair in July 2021 with an <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2021/07/09/executive-order-on-promoting-competition-in-the-american-economy/">executive order supporting competition</a>. Recently, attorneys general from 28 states <a href="https://pirg.org/updates/28-attorneys-general-call-for-right-to-repair-legislation/">called on lawmakers to advance a right to repair federally</a>.</p>
<p>Second, it’s important to effectively counter industry opposition, <a href="https://maplightarchive.org/story/in-fight-over-the-right-to-repair-equipment-farmers-are-outspent-28-to-1/">which has been successful in defeating right-to-repair legislation</a>. Such legislation continues to face stiff industry opposition at the state level in the U.S. </p>
<p>Big companies in the technology, vehicle and agricultural industries have long <a href="https://www.washingtonpost.com/business/how-tech-firms-are-resisting-the-right-to-repair/2023/01/19/759b0da0-97f6-11ed-a173-61e055ec24ef_story.html">lobbied against the right to repair</a>. They argue that repairing or tinkering with their software-enabled products raises potentially serious security and safety complications. </p>
<p>Though such concerns may be valid in some cases (particularly when dealing with safety-critical goods such as medical devices), these are exceptions. In most cases, independent repair by appropriately trained technicians is a safe, viable alternative to manufacturers’ “authorized” repairs.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a stall at a secondhand market with books on a table, clothes on a hangline, and household objects piled up" src="https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519558/original/file-20230405-14-h0at9h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A right to repair supports the circulation of secondhand goods.</span>
<span class="attribution"><span class="source">(Jon Tyson/Unsplash)</span></span>
</figcaption>
</figure>
<p>Third, policymakers should ensure broad engagement with and representation from the people who are most affected by restrictive repair policies. These include small farmers, independent repairers, small retailers of refurbished goods, people who patronize second-hand or reseller stores, and those in the <a href="https://www.accc.gov.au/focus-areas/market-studies/agricultural-machinery-market-study">aftermarket industry</a> selling third-party parts. </p>
<p>Input is also needed from people living outside major population centres who must travel to authorized repair shops or otherwise incur costs in time and money in receiving service.</p>
<p>Fourth, it’s time to recognize that the right to repair has benefits beyond consumer rights. Repair bolsters secondary markets, including second-hand stores and resellers that provide their customers with viable used goods, <a href="https://www.ftc.gov/reports/nixing-fix-ftc-report-congress-repair-restrictions">which are important money-savers for economically marginalized communities</a>.</p>
<p>Repair also helps decrease the <a href="https://www.sierraclub.ca/en/ontario-chapter/2022-02-04/right-repair-why-we-need-it">environmental burden of modern consumerism</a>. This problem is particularly acute in the manufacture of many electronic technologies — once these products no longer function, they are dumped as e-waste, often in <a href="https://ourworld.unu.edu/en/toxic-e-waste-dumped-in-poor-nations-says-united-nations">developing countries</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/beyond-recycling-solving-e-waste-problems-must-include-designers-and-consumers-41719">Beyond recycling: solving e-waste problems must include designers and consumers</a>
</strong>
</em>
</p>
<hr>
<p>Finally, policymakers should consider a broad interpretation of the right to repair. This could include requiring manufacturers to make available at competitive prices the necessary items for repair, including diagnostic software and replacement parts. It could also restrict manufacturers’ practice of <a href="https://pirg.org/articles/6-ways-to-stop-planned-obsolescence/">planned obsolescence</a> – that is, rendering functional goods inoperative by withholding essential software updates.</p>
<p>The federal government is offering Canadians a chance to create a right to repair. We should seize the opportunity.</p><img src="https://counter.theconversation.com/content/203302/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Natasha Tusikov receives funding from the Social Sciences and Humanities Research Council. She is affiliated with the Centre for International Governance Innovation.</span></em></p>The Canadian government is planning to implement ‘right to repair’ for electronic devices. This has the potential to give consumers more options, and challenges corporations’ hold over knowledge.Natasha Tusikov, Associate Professor, Criminology, Department of Social Science, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1994562023-03-31T10:59:27Z2023-03-31T10:59:27ZAI will soon become impossible for humans to comprehend – the story of neural networks tells us why<figure><img src="https://images.theconversation.com/files/516746/original/file-20230321-20-arhk93.jpg?ixlib=rb-1.1.0&rect=871%2C206%2C4131%2C2847&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/blockchain-network-machine-learning-deep-neural-1724030938">Shutterstock/Valentyn640</a></span></figcaption></figure><p>In 1956, during a year-long trip to London and in his early 20s, the mathematician and theoretical biologist Jack D. Cowan visited Wilfred Taylor and his strange new “<a href="https://users.sussex.ac.uk/%7Ephilh/pubs/CowanInterview.pdf">learning machine</a>”. On his arrival he was baffled by the “huge bank of apparatus” that confronted him. Cowan could only stand by and watch “the machine doing its thing”. The thing it appeared to be doing was performing an “associative memory scheme” – it seemed to be able to learn how to find connections and retrieve data.</p>
<p>It may have looked like clunky blocks of circuitry, soldered together by hand in a mass of wires and boxes, but what Cowan was witnessing was an early analogue form of a neural network – a precursor to the most advanced artificial intelligence of today, including the much discussed <a href="https://theconversation.com/uk/topics/chatgpt-130961">ChatGPT</a> with its ability to generate written content in response to almost any command. ChatGPT’s underlying technology is a neural network. </p>
<p>As Cowan and Taylor stood and watched the machine work, they really had no idea exactly how it was managing to perform this task. The answer to Taylor’s mystery machine brain can be found somewhere in its “analog neurons”, in the associations made by its machine memory and, most importantly, in the fact that its automated functioning couldn’t really be fully explained. It would take decades for these systems to find their purpose and for that power to be unlocked.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Black and white image of a man sitting down." src="https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=456&fit=crop&dpr=1 600w, https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=456&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=456&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=573&fit=crop&dpr=1 754w, https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=573&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/516907/original/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=573&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Jack Cowan, who played a key part in the development of neural networks from the 1950s onwards.</span>
<span class="attribution"><a class="source" href="https://photoarchive.lib.uchicago.edu/db.xqy?one=apf1-12037.xml">University of Chicago Photographic Archive, Hanna Holborn Gray Special Collections Research Center.</a></span>
</figcaption>
</figure>
<p>The term neural network incorporates a wide range of systems, yet centrally, <a href="https://www.ibm.com/topics/neural-networks">according to IBM</a>, these “neural networks – also known as artificial neural networks (ANNs) or simulated neural networks (SNNs) – are a subset of machine learning and are at the heart of deep learning algorithms”. Crucially, the term itself and their form and “structure are inspired by the human brain, mimicking the way that biological neurons signal to one another”.</p>
<p>There may have been some residual doubt about their value in the early stages, but as the years have passed, AI fashions have swung firmly towards neural networks. They are now often understood to be the future of AI. They have big implications for us and for what it means to be human. We have heard <a href="https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months/">echoes of these concerns recently</a> with calls to pause new AI developments for a six-month period to ensure confidence in their implications. </p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This article is part of Conversation Insights</em></strong>
<br><em>The Insights team generates <a href="https://theconversation.com/uk/topics/insights-series-71218">long-form journalism</a> derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.</em></p>
<hr>
<p>It would certainly be a mistake to dismiss the neural network as being solely about glossy, eye-catching new gadgets. They are already well established in our lives. Some are powerful in their practicality. As far back as 1989, a team led by Yann LeCun at AT&T Bell Laboratories used back-propagation techniques to train a system to <a href="https://www.ibm.com/topics/neural-networks">recognise handwritten postal codes</a>. The recent <a href="https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/">announcement by Microsoft</a> that Bing searches will be powered by AI, making it your “copilot for the web”, illustrates how the things we discover and how we understand them will increasingly be a product of this type of automation.</p>
<p>Drawing on vast data to find patterns, AI can similarly be trained to do things like image recognition at speed – resulting in its incorporation into <a href="https://patents.google.com/patent/US7295687B2/en">facial recognition</a> systems, for instance. This ability to identify patterns has led to many other applications, such as <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-020-00333-6">predicting stock markets</a>.</p>
<p>Neural networks are changing how we interpret and communicate too. Developed by the interestingly titled <a href="https://g.co/brain">Google Brain Team</a>, <a href="https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html">Google Translate</a> is another prominent application of a neural network. </p>
<p>You wouldn’t want to play chess or shogi with one either. Their grasp of rules and their recall of strategies and all recorded moves mean that they are exceptionally good at games (although ChatGPT seems to <a href="https://theconversation.com/chatgpt-struggles-with-wordle-puzzles-which-says-a-lot-about-how-it-works-201906">struggle with Wordle</a>). The systems that are troubling human Go players (Go is a notoriously tricky strategy board game) and chess grandmasters are <a href="https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go">made from neural networks</a>.</p>
<p>But their reach goes far beyond these instances and continues to expand. A search of patents restricted only to mentions of the exact phrase “neural networks” produces 135,828 results. With this rapid and ongoing expansion, the chances of us being able to fully explain the influence of AI may become ever thinner. These are the questions I have been examining in my research <a href="https://bristoluniversitypress.co.uk/the-tensions-of-algorithmic-thinking">and my new book on algorithmic thinking</a>.</p>
<h2>Mysterious layers of ‘unknowability’</h2>
<p>Looking back at the history of neural networks tells us something important about the automated decisions that define our present or those that will have a possibly more profound impact in the future. Their presence also tells us that we are likely to understand the decisions and impacts of AI even less over time. These systems are not simply black boxes; they are not just hidden bits of a system that can’t be seen or understood.</p>
<p>It is something different, something rooted in the aims and design of these systems themselves. There is a long-held pursuit of the unexplainable. The more opaque, the more authentic and advanced the system is thought to be. It is not just about the systems becoming more complex or the control of intellectual property limiting access (although these are part of it). It is instead to say that the ethos driving them has a particular and embedded interest in “unknowability”. The mystery is even coded into the very form and discourse of the neural network. They come with deeply piled layers – hence the phrase deep learning – and within those depths are the even more mysterious sounding “hidden layers”. The mysteries of these systems are deep below the surface.</p>
<p>There is a good chance that the greater the impact that artificial intelligence comes to have in our lives the less we will understand how or why. Today there is a strong push for AI that is explainable. We want to know how it works and how it arrives at decisions and outcomes. The EU is so concerned by the potentially “unacceptable risks” and even “dangerous” applications that it is currently advancing <a href="https://artificialintelligenceact.eu">a new AI Act</a> intended to set a “global standard” for “the development of secure, trustworthy and ethical artificial intelligence”.</p>
<p>Those new laws will be based on a need for explainability, <a href="https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF">demanding that</a> “for high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI”. This is not just about things like self-driving cars (although systems that ensure safety fall into the EU’s category of high risk AI), it is also a worry that systems will emerge in the future that will have implications for human rights. </p>
<p>This is part of wider calls for transparency in AI so that its activities can be checked, audited and assessed. Another example would be the Royal Society’s <a href="https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf">policy briefing on explainable AI</a> in which they point out that “policy debates across the world increasingly see calls for some form of AI explainability, as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems”.</p>
<p>But the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.</p>
<h2>Inspired by the human brain</h2>
<p>These neural networks may be complex systems, yet they have some core principles. Inspired by the human brain, they seek to copy or simulate forms of biological and human thinking. In terms of structure and design they are, as <a href="https://www.ibm.com/topics/neural-networks">IBM also explains</a>, comprised of “node layers, containing an input layer, one or more hidden layers, and an output layer”. Within this, “each node, or artificial neuron, connects to another”. Because they require inputs and information to create outputs they “rely on training data to learn and improve their accuracy over time”. These technical details matter but so too does the wish to model these systems on the complexities of the human brain.</p>
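<p>To make that description concrete, the following is a minimal sketch in Python (using NumPy) of exactly this structure – an input layer, one hidden layer of artificial neurons and an output layer, with the connections between them adjusted from training data. It is an illustration of the general idea only, invented for this piece; it is not code from IBM or from any of the systems discussed here.</p>
<pre><code>import numpy as np

# Toy training data: the XOR problem (inputs and the target outputs)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# One hidden layer of four "artificial neurons" between the input and output layers
W1 = rng.normal(size=(2, 4))   # input layer  to hidden layer weights
W2 = rng.normal(size=(4, 1))   # hidden layer to output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: data is multiplied by the weights and passed layer to layer
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights to reduce the prediction error
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(output.round(3))  # after enough training this typically approaches [0, 1, 1, 0]
</code></pre>
<p>Even in this tiny example, the “learning” amounts to nothing more than repeatedly adjusting numbers in the weight matrices until the outputs match the training data – there is no step at which the network explains why the final weights work.</p>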
<p>Grasping the ambition behind these systems is vital in understanding what these technical details have come to mean in practice. In a <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">1993 interview</a>, the neural network scientist Teuvo Kohonen concluded that a “self-organising” system “is my dream”, operating “something like what our nervous system is doing instinctively”. As an example, Kohonen pictured how a “self-organising” system, a system that monitored and managed itself, “could be used as a monitoring panel for any machine … in every airplane, jet plane, or every nuclear power station, or every car”. This, he thought, would mean that in the future “you could see immediately what condition the system is in”. </p>
<figure class="align-center ">
<img alt="A group of male scientists around an old computer." src="https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=561&fit=crop&dpr=1 754w, https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=561&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/516910/original/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=561&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Early computing often involved a large apparatus of assembled parts.</span>
<span class="attribution"><span class="source">Aalto University Archives</span></span>
</figcaption>
</figure>
<p>The overarching objective was to have a system capable of adapting to its surroundings. It would be instant and autonomous, operating in the style of the nervous system. That was the dream, to have systems that could handle themselves without the need for much human intervention. The complexities and unknowns of the brain, the nervous system and the real world would soon come to inform the development and design of neural networks.</p>
<h2>‘Something fishy about it’</h2>
<p>But jumping back to 1956 and that strange learning machine, it was the hands-on approach that Taylor had taken when building it that immediately caught Cowan’s attention. He had clearly sweated over the assembly of the bits and pieces. Taylor, <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">Cowan observed</a> during an interview on his own part in the story of these systems, “didn’t do it by theory, and he didn’t do it on a computer”. Instead, with tools in hand, he “actually built the hardware”. It was a material thing, a combination of parts, perhaps even a contraption. And it was “all done with analogue circuitry” taking Taylor, Cowan notes, “several years to build it and to play with it”. A case of trial and error.</p>
<p>Understandably Cowan wanted to get to grips with what he was seeing. He tried to get Taylor to explain this learning machine to him. The clarifications didn’t come. Cowan couldn’t get Taylor to describe to him how the thing worked. The analogue neurons remained a mystery. The more surprising problem, Cowan thought, was that Taylor “didn’t really understand himself what was going on”. This wasn’t just a momentary breakdown in communication between the two scientists with different specialisms, it was more than that.</p>
<p>In an <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">interview from the mid-1990s</a>, thinking back to Taylor’s machine, Cowan revealed that “to this day in published papers you can’t quite understand how it works”. This conclusion is suggestive of how the unknown is deeply embedded in neural networks. The unexplainability of these neural systems has been present even from the fundamental and developmental stages dating back nearly seven decades. </p>
<p>This mystery remains today and is to be found within advancing forms of AI. The unfathomability of the functioning of the associations made by Taylor’s machine led Cowan to wonder if there was “something fishy about it”.</p>
<h2>Long and tangled roots</h2>
<p>Cowan referred back to his brief visit with Taylor when asked about the reception of his own work some years later. Into the 1960s people were, Cowan reflected, “a little slow to see the point of an analogue neural network”. This was despite, Cowan recalls, Taylor’s 1950s work on “associative memory” being based on “analog neurons”. The Nobel Prize-winning neural systems expert, <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">Leon N. Cooper, concluded</a> that developments around the application of the brain model in the 1960s, were regarded “as among the deep mysteries”. Because of this uncertainty there remained a scepticism about what a neural network might achieve. But things slowly began to change.</p>
<p>Some 30 years ago the neuroscientist Walter J. Freeman, who was surprised by the “<a href="https://mitpress.mit.edu/9780262511117/talking-nets/">remarkable</a>” range of applications that had been found for neural networks, was already commenting on the fact that he didn’t see them as “a fundamentally new kind of machine”. They were a slow burn, with the technology coming first and then subsequent applications being found for it. This took time. Indeed, to find the roots of neural network technology we might head back even further than Cowan’s visit to Taylor’s mysterious machine. </p>
<p>The neural net scientist James Anderson and the science journalist Edward Rosenfeld <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">have noted</a> that the background to neural networks goes back into the 1940s and some early attempts to, as they describe, “understand the human nervous systems and to build artificial systems that act the way we do, at least a little bit”. And so, in the 1940s, the mysteries of the human nervous system also became the mysteries of computational thinking and artificial intelligence.</p>
<p>Summarising this long story, the computer science writer <a href="https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414">Larry Hardesty has pointed out</a> that deep learning in the form of neural networks “have been going in and out of fashion for more than 70 years”. More specifically, he adds, these “neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department”.</p>
<figure class="align-right ">
<img alt="Black and white image of two men" src="https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=572&fit=crop&dpr=1 600w, https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=572&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=572&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=718&fit=crop&dpr=1 754w, https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=718&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/516916/original/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=718&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The inventors of the neural network Walter Pitts and Warren McCulloch pictured here in 1949.</span>
<span class="attribution"><a class="source" href="https://www.semanticscholar.org/paper/On-the-legacy-of-W.S.-McCulloch-Moreno-D%C3%ADaz-Moreno-D%C3%ADaz/8056242a82ecc5e0064d4ff187fb07c5853fe8a6">Semantic Scholar</a></span>
</figcaption>
</figure>
<p>Elsewhere, <a href="https://www.historyofinformation.com/detail.php?entryid=782">1943</a> is sometimes given as the first year for the technology. Either way, for roughly 70 years accounts suggest that neural networks have moved in and out of vogue, often neglected but then sometimes taking hold and moving into more mainstream applications and debates. The uncertainty persisted. Those early developers frequently describe the importance of their research as being overlooked, until it found its purpose often years and sometimes decades later.</p>
<p>Moving from the 1960s into the late 1970s we can find further stories of the unknown properties of these systems. Even then, after three decades, the neural network was still to find a sense of purpose. David Rumelhart, who had a background in psychology and was a co-author of a set of books published in 1986 that would later drive attention back again towards neural networks, found himself collaborating on the development of neural networks <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">with his colleague Jay McClelland</a>. </p>
<p>As well as being colleagues they had also recently encountered each other at a conference in Minnesota where Rumelhart’s talk on “story understanding” had provoked some discussion among the delegates.</p>
<p>Following that conference McClelland returned with a thought about how to develop a neural network that might combine models to be more interactive. What matters here is <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">Rumelhart’s recollection</a> of the “hours and hours and hours of tinkering on the computer”.</p>
<blockquote>
<p>We sat down and did all this in the computer and built these computer models, and we just didn’t understand them. We didn’t understand why they worked or why they didn’t work or what was critical about them.</p>
</blockquote>
<p>Like Taylor, Rumelhart found himself tinkering with the system. They too created a functioning neural network and, crucially, they also weren’t sure how or why it worked in the way that it did, seemingly learning from data and finding associations.</p>
<h2>Mimicking the brain – layer after layer</h2>
<p>You may already have noticed that when discussing the origins of neural networks the image of the brain and the complexity this evokes are never far away. The human brain acted as a sort of template for these systems. In the early stages, in particular, the brain – still one of the great unknowns – became a model for how the neural network might function. </p>
<figure class="align-center ">
<img alt="Design concept of layers in the brain." src="https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/516924/original/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The model of the brain became a model for the layering within artificial neural networks.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/brain-paper-cut-style-layers-art-1303430377">Shutterstock/CYB3RUSS</a></span>
</figcaption>
</figure>
<p>So these experimental new systems were modelled on something whose functioning was itself largely unknown. The neurocomputing engineer Carver Mead <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">has spoken revealingly</a> of the conception of a “cognitive iceberg” that he had found particularly appealing. We are aware of only the tip of the iceberg of consciousness, the part that is visible. The scale and form of the rest remains unknown below the surface.</p>
<p>In 1998, <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">James Anderson</a>, who had been working for some time on neural networks, noted that when it came to research on the brain “our major discovery seems to be an awareness that we really don’t know what is going on”.</p>
<p>In a detailed account in the <a href="https://www.ft.com/content/bcd81a88-cadb-11e8-b276-b9069bde0956">Financial Times in 2018</a>, technology journalist Richard Waters noted how neural networks “are modelled on a theory about how the human brain operates, passing data through layers of artificial neurons until an identifiable pattern emerges”. This creates a knock-on problem, Waters proposed, as “unlike the logic circuits employed in a traditional software program, there is no way of tracking this process to identify exactly why a computer comes up with a particular answer”. Waters’ conclusion is that these outcomes cannot be unpicked. The application of this type of model of the brain, taking the data through many layers, means that the answer cannot readily be retraced. The multiple layering is a good part of the reason for this.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-celebrated-ai-has-learned-a-new-trick-how-to-do-chemistry-182031">A celebrated AI has learned a new trick: How to do chemistry</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414">Hardesty</a> also observed these systems are “modelled loosely on the human brain”. This brings an eagerness to build in ever more processing complexity in order to try to match up with the brain. The result of this aim is a neural net that “consists of thousands or even millions of simple processing nodes that are densely interconnected”. Data moves through these nodes in only one direction. Hardesty observed that an “individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data”.</p>
<p>Models of the human brain were a part of how these neural networks were conceived and designed from the outset. This is particularly interesting when we consider that the brain was itself a mystery of the time (and in many ways still is). </p>
<h2>‘Adaptation is the whole game’</h2>
<p>Scientists like Mead and Kohonen wanted to create a system that could genuinely adapt to the world in which it found itself. It would respond to its conditions. Mead was clear that the value in neural networks was that they could facilitate this type of adaptation. At the time, and reflecting on this ambition, <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">Mead added</a> that producing adaptation “is the whole game”. This adaptation is needed, he thought, “because of the nature of the real world”, which he concluded is “too variable to do anything absolute”.</p>
<p>This problem needed to be reckoned with especially as, he thought, this was something “the nervous system figured out a long time ago”. Not only were these innovators working with an image of the brain and its unknowns, they were combining this with a vision of the “real world” and the uncertainties, unknowns and variability that this brings. The systems, Mead thought, needed to be able to respond and adapt to circumstances <em>without</em> instruction.</p>
<p>Around the same time in the 1990s, Stephen Grossberg – an expert in cognitive systems working across maths, psychology and biomedical engineering – <a href="https://mitpress.mit.edu/9780262511117/talking-nets/">also argued that</a> adaptation was going to be the important step in the longer term. Grossberg, as he worked away on neural network modelling, thought to himself that it is all “about how biological measurement and control systems are designed to adapt quickly and stably in real time to a rapidly fluctuating world”. As we saw earlier with Kohonen’s “dream” of a “self-organising” system, a notion of the “real world” becomes the context in which response and adaptation are being coded into these systems. How that real world is understood and imagined undoubtedly shapes how these systems are designed to adapt.</p>
<h2>Hidden layers</h2>
<p>As the layers multiplied, deep learning plumbed new depths. The neural network is trained using training data that, <a href="https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414">Hardesty explained</a>, “is fed to the bottom layer – the input layer – and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer”. The more layers, the greater the transformation and the greater the distance from input to output. The development of Graphics Processing Units (GPUs), in gaming for instance, Hardesty added, “enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the ten, 15, or even 50-layer networks of today”. </p>
<p>Neural networks are getting deeper. Indeed, it’s this adding of layers, according to Hardesty, that is “what the ‘deep’ in ‘deep learning’ refers to”. This matters, he proposes, because “currently, deep learning is responsible for the best-performing systems in almost every area of artificial intelligence research”.</p>
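<p>The “multiplied and added together” journey Hardesty describes can be seen in miniature in another short sketch (again Python with NumPy, invented purely for illustration and not taken from any production system): a small vector of data pushed through a stack of ten randomly weighted layers comes out so thoroughly transformed that retracing why it ended up with those particular values is already awkward, even at this toy scale.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)

# A stack of ten weight matrices: a crude stand-in for a "deep" network
layers = [rng.normal(size=(8, 8)) for _ in range(10)]

x = rng.normal(size=8)      # data enters at the input layer
for W in layers:            # and is multiplied and added, layer after layer,
    x = np.tanh(W @ x)      # passing through a non-linearity each time

print(x.round(3))           # radically transformed by the time it reaches the output
</code></pre>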
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-and-the-future-of-work-5-experts-on-what-chatgpt-dall-e-and-other-ai-tools-mean-for-artists-and-knowledge-workers-196783">AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers</a>
</strong>
</em>
</p>
<hr>
<p>But the mystery gets deeper still. As the layers of neural networks have piled higher their complexity has grown. It has also led to the growth in what are referred to as “hidden layers” within these depths. The discussion of the optimum number of hidden layers in a neural network is ongoing. The media theorist <a href="https://journals.sagepub.com/doi/pdf/10.1177/0263276420966386">Beatrice Fazi has written</a> that “because of how a deep neural network operates, relying on hidden neural layers sandwiched between the first layer of neurons (the input layer) and the last layer (the output layer), deep-learning techniques are often opaque or illegible even to the programmers that originally set them up”. </p>
<p>As the layers increase (including those hidden layers) they become even less explainable – even, as it turns out, again, to those creating them. Making a similar point, the prominent and interdisciplinary new media thinker Katherine Hayles <a href="https://journals.sagepub.com/doi/pdf/10.1177/0263276419829539">also noted</a> that there are limits to “how much we can know about the system, a result relevant to the ‘hidden layer’ in neural net and deep learning algorithms”.</p>
<h2>Pursuing the unexplainable</h2>
<p>Taken together, these long developments are part of what the sociologist of technology <a href="https://global.oup.com/academic/product/ifthen-9780190493035?cc=gb&lang=en&">Taina Bucher</a> has called the “problematic of the unknown”. Expanding his influential research on scientific knowledge into the field of AI, Harry Collins <a href="https://www.wiley.com/en-sg/Artifictional+Intelligence:+Against+Humanity's+Surrender+to+Computers-p-9781509504152">has pointed out that</a> the objective with neural nets is that they may be produced by a human, initially at least, but “once written the program lives its own life, as it were; without huge effort, exactly how the program is working can remain mysterious”. This has echoes of those long-held dreams of a self-organising system. </p>
<p>I’d add to this that the unknown and maybe even the unknowable have been pursued as a fundamental part of these systems from their earliest stages. There is a good chance that the greater the impact that artificial intelligence comes to have in our lives the less we will understand how or why.</p>
<p>But that doesn’t sit well with many today. We want to know how AI works and how it arrives at the decisions and outcomes that impact us. As developments in AI continue to shape our knowledge and understanding of the world, what we discover, how we are treated, how we learn, consume and interact, this impulse to understand will grow. When it comes to explainable and transparent AI, the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.</p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=112&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=112&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=112&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=140&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=140&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=140&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em>For you: more from our <a href="https://theconversation.com/uk/topics/insights-series-71218?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Insights series</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/the-artist-formerly-known-as-camille-princes-lost-album-comes-out-189486">The artist formerly known as Camille – Prince’s lost album ‘comes out’</a></em></p></li>
<li><p><em><a href="https://theconversation.com/its-like-being-in-a-warzone-aande-nurses-open-up-about-the-emotional-cost-of-working-on-the-nhs-frontline-194197">‘It’s like being in a warzone’ – A&E nurses open up about the emotional cost of working on the NHS frontline</a></em></p></li>
<li><p><em><a href="https://theconversation.com/living-with-mnd-how-a-form-of-acceptance-therapy-is-helping-me-make-one-difficult-choice-at-a-time-184973">Living with MND: how a form of ‘acceptance therapy’ is helping me make one difficult choice at a time</a></em></p></li>
</ul>
<p><em>To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK"><strong>Subscribe to our newsletter</strong></a>.</em></p><img src="https://counter.theconversation.com/content/199456/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Beer’s book The Tensions of Algorithmic Thinking has recently been published by Bristol University Press.</span></em></p>Many of the pioneers who began developing artificial neural networks weren’t sure how they actually worked - and we’re no more certain today.David Beer, Professor of Sociology, University of YorkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2001452023-03-20T17:26:21Z2023-03-20T17:26:21ZInteractive cinema: how films could alter plotlines in real time by responding to viewers’ emotions<figure><img src="https://images.theconversation.com/files/514508/original/file-20230309-24-nzv160.JPG?ixlib=rb-1.1.0&rect=36%2C144%2C5969%2C3760&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A sentinel droid features in the film Before We Disappear.</span> <span class="attribution"><span class="source">AlbinoMosquito Productions Ltd</span>, <span class="license">Author provided</span></span></figcaption></figure><p>Most films offer exactly the same viewing experience. You sit down, the film starts, the plot unfolds and you follow what’s happening on screen until the story concludes. It’s a linear experience. My new film, <a href="http://www.albinomosquito.com/before-we-disappear/">Before We Disappear</a> – about a pair of climate activists who seek revenge on corporate perpetrators of global warming – seeks to alter that viewing experience.</p>
<p>What makes my film different is that it adapts the story to fit the viewer’s emotional response. Through the use of a computer camera and software, the film effectively watches the audience as they view footage of climate disasters. Viewers are implicitly asked to choose a side. </p>
<p>I chose to use this technology to make a film about the climate crisis to get people to really think about what they are willing to sacrifice for a survivable future. </p>
<p>Storytelling has always been interactive: traditional oral storytellers would interact and respond to their listeners. For almost a century, film directors have been <a href="https://en.wikipedia.org/wiki/Interactive_cinema">experimenting with interactivity</a> – the past decade has seen an explosion of interactive content.</p>
<p>Streaming services give viewers the opportunity to choose their own adventure. However, letting the viewer control the action has long posed a challenge: it’s at odds with narrative immersion, where the viewer is drawn into the world created by the story.</p>
<p>One of the most prominent recent experiments in interactive film, <a href="https://en.wikipedia.org/wiki/Black_Mirror:_Bandersnatch">Netflix’s Bandersnatch</a>, clearly illustrates this. Here the action stops to ask the user what to do next – breaking the flow of the story and actively involving the viewer. Solving this issue of breaking the immersive experience remains a key question for artists exploring interactive film.</p>
<figure class="align-center ">
<img alt="Still from Before We Disappear" src="https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&rect=10%2C0%2C3409%2C1426&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=251&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=251&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=251&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=316&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=316&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514458/original/file-20230309-1353-ri9b9d.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=316&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Before We Disappear uses emotional cues from the viewer to edit the film in real time.</span>
<span class="attribution"><span class="source">AlbinoMosquito Productions Ltd</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The films I create and direct take a different route, leveraging non-conscious control to influence a film as the audience watches. My previous <a href="http://braincontrolledmovie.co.uk/">brain-controlled</a> films, <a href="https://www.imdb.com/title/tt7853742/">The Moment (2018)</a> and <a href="https://www.imdb.com/title/tt8072006/">The Disadvantages of Time Travel (2014)</a>, used brain computer interfaces (BCIs). These systems use computers to <a href="https://nottingham-repository.worktribe.com/output/980302/scanners-exploring-the-control-of-adaptive-films-using-brain-computer-interaction">analyse electrical signals from the brain</a>, allowing people to effectively control a device with their minds.</p>
<p>Using this data from the brain, audiences <a href="https://nottingham-repository.worktribe.com/index.php/output/1468705/from-directors-cut-to-users-cut-to-watch-a-brain-controlled-film-is-to-edit-it">create a non-conscious edit</a> of the film in real time – reinforcing the films’ respective stories of science-fiction dystopia and a wandering, daydreaming mind. </p>
<p>However, the BCI interface requires specialised equipment. For Before We Disappear, I wanted to use a technology more readily available to audiences, that could allow films to be shared over the internet.</p>
<h2>Controlling the narrative</h2>
<p>Before We Disappear uses an ordinary computer camera to read emotional cues and instruct the real-time edit of the film. To make this work, we needed a good understanding of how people react to films. </p>
<p>We ran several <a href="https://dl.acm.org/doi/10.1145/3290607.3312814">studies</a> <a href="https://dl.acm.org/doi/10.1145/3290605.3300378">exploring the emotions</a> filmmakers intend to evoke and how viewers visually present emotion when watching. By using computer vision and machine learning techniques from our partner <a href="https://www.blueskeye.com/">BlueSkeye AI</a>, we analysed viewers’ facial emotions and reactions to film clips and developed several algorithms to leverage that data to control a narrative.</p>
<figure class="align-center ">
<img alt="Behind the scenes of filming Before We Disappear" src="https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514714/original/file-20230310-16-8iwpo.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Behind the scenes of filming Before We Disappear.</span>
<span class="attribution"><span class="source">AlbinoMosquito Productions Ltd</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>While we observed that audiences tend not to emote extensively when watching a film, BlueSkeye’s face and emotion analysis tools are sensitive enough to pick up the small variations and emotional cues needed to adapt the film to viewer reactions. </p>
<p>The analysis software measures facial muscle movement along with the strength of emotional arousal – essentially how emotional a viewer feels in a particular moment. The software also evaluates the positivity or negativity of the emotion – something we call “<a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00261/full">valence</a>”. </p>
<p>We are experimenting with various algorithms where this arousal and valence data contributes to real-time edit decisions, which causes the story to reconfigure itself. The first scene acts as a baseline, which the next scene is measured against. Depending on the response, the narrative will become one of around 500 possible edits. In Before We Disappear, I use a non-linear narrative which offers the audience different endings and emotional journeys. </p>
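<p>As a rough illustration of how such a rule might look – this is only a sketch, not the system we actually use, and the scene names and thresholds here are invented for the example – a branch decision driven by arousal and valence could be written along these lines:</p>
<pre><code>
# Hypothetical sketch: picking the next scene from arousal/valence readings,
# measured against the baseline captured during the opening scene.
# The real system's scenes, thresholds and logic are more involved.

def choose_next_scene(arousal, valence, baseline_arousal, baseline_valence):
    arousal_shift = arousal - baseline_arousal
    valence_shift = valence - baseline_valence

    if arousal_shift > 0.2 and valence_shift > 0:
        return "hopeful_branch"      # viewer is engaged and feeling positive
    if arousal_shift > 0.2:
        return "confronting_branch"  # engaged but feeling negative
    return "reflective_branch"       # little change from the baseline

# Example: a viewer whose arousal rises while their valence turns negative.
print(choose_next_scene(arousal=0.7, valence=-0.1,
                        baseline_arousal=0.4, baseline_valence=0.1))
</code></pre>
<p>In the actual film, decisions like this accumulate scene by scene, which is how the narrative can end up as one of around 500 possible edits.</p>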
<h2>Emotional journey</h2>
<p>I see interactive technology as a way of expanding the filmmaker’s toolkit, to further tell a story and allow the film to adapt to an individual viewer, challenging and distributing the power of the director. </p>
<p>However, emotional responses could be misused or have unforeseen consequences. It is not hard to imagine an online system showing only content eliciting positive emotions from the user. This could be used to create an echo chamber – where people only see content that matches the preferences they already have. </p>
<p>Or it could be used for propaganda. We saw in the Cambridge Analytica scandal how <a href="https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal">large amounts of personal information</a> were collected from Facebook and used for political advertising. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=251&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=251&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=251&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=316&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=316&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514727/original/file-20230310-22-fdoxib.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=316&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The new film explores solutions to the climate crisis.</span>
<span class="attribution"><span class="source">AlbinoMosquito Productions Ltd</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Our <a href="https://www.researchgate.net/publication/348325526_Brain-controlled_cinematic_interactions">research</a> aims to generate conversation about how users’ emotion data can be used responsibly, with informed consent, while allowing users to control their own personal information. In our system, the data is analysed on the user’s device rather than, say, in the cloud.</p>
<h2>Big business, big responsibility</h2>
<p>Non-conscious interaction is big business. Platforms such as <a href="https://www.nbcnews.com/tech/social-media/algorithms-take-over-youtube-s-recommendations-highlight-human-problem-n867596">TikTok</a> and <a href="https://www.nbcnews.com/tech/social-media/algorithms-take-over-youtube-s-recommendations-highlight-human-problem-n867596">YouTube</a> use analysis of users’ past interactions on the platforms to influence the new content they see there. Users are not always aware of what personal information is being created or stored, nor can they influence what algorithms will present to them next. </p>
<p>It’s important to create a system where audiences’ data is not stored. Video of the viewer or facial expression data should not be uploaded or analysed anywhere but on the player device. We plan to release the film as an interactive app, incorporating an awareness of potential abuse of the user’s data, and safeguarding any personal data on the device used to watch it.</p>
<p>Adaptive films offer an alternative to traditional “choose-your-own-adventure” storytelling. When the story changes based on the audience’s unconscious responses rather than on intentional interaction, viewers’ focus can stay on the story.</p>
<p>This means they can enjoy a more personalised experience of the film. It turns out the old traditions of storytelling may still have much to teach us in the 21st century.</p><img src="https://counter.theconversation.com/content/200145/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Ramchurn is a director of AlbinoMosquito Productions Ltd, and also receives funding from the Arts Council England and EPSRC. </span></em></p>A new film changes its story based on the emotions expressed by the viewer.Richard Ramchurn, Assistant researcher, University of NottinghamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1980622023-02-22T12:52:25Z2023-02-22T12:52:25ZChatGPT could be an effective and affordable tutor<figure><img src="https://images.theconversation.com/files/511462/original/file-20230221-14-g9iayw.jpg?ixlib=rb-1.1.0&rect=32%2C80%2C5345%2C3499&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Educational software has a long history, but chatbots could help students excel like never before.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/boy-using-smartphone-typing-chatting-conversation-royalty-free-image/1429924369?phrase=chatbot&adppopup=true">Fabio Principe / EyeEm via Getty Images</a></span></figcaption></figure><p>Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone. In 1966, Stanford philosophy professor Patrick Suppes did just that when he <a href="https://www.scientificamerican.com/article/the-uses-of-computers-in-education/">made this prediction</a>: One day, computer technology would evolve so that “millions of schoolchildren” would have access to a personal tutor. He said the conditions would be just like the <a href="https://www.worldhistory.org/aristotle/">young prince Alexander the Great being tutored by Aristotle</a>.</p>
<p>Now, <a href="https://openai.com/blog/chatgpt/">ChatGPT</a>, a new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor. ChatGPT has collected huge amounts of data on a wide range of topics and can <a href="https://mackinstitute.wharton.upenn.edu/2023/would-chat-gpt3-get-a-wharton-mba-new-white-paper-by-christian-terwiesch/">pass graduate school exams</a>. As a researcher who studies <a href="https://scholar.google.com/citations?user=yRJ7ARQAAAAJ&hl=en&oi=ao">how computers can be used to help people learn</a>, I think ChatGPT can be used to help students excel academically. However, in its current form, ChatGPT shows an inability <a href="https://www.nytimes.com/2023/02/16/technology/chatbots-explained.html">to stay focused on one particular task</a>, let alone tutoring.</p>
<p>Philosophy, engineering and artificial intelligence scholars envisioned using the computer as an <a href="https://doi.org/10.1007/BF00168958">“intelligent tutor”</a> well before the internet <a href="https://www.vox.com/2014/6/16/18076282/the-internet">became a global commercial network in the 1990s</a>. I believe lessons from developing those early tutoring systems can offer insight into how students and educators can best make use of ChatGPT as a tutor in the future.</p>
<h2>Computers as tutors</h2>
<p>Suppes – the Stanford philosophy professor – was a pioneer of a field called “<a href="https://www.nytimes.com/2014/12/03/us/patrick-suppes-pioneer-in-computerized-learning-dies-at-92.html">computer-assisted instruction</a>.” He developed some of the earliest educational software. That software <a href="https://doi.org/10.1126/science.166.3903.343">provided individual instruction via computer and led students to have better test results</a> than those who didn’t use the program. I worked for Suppes in developing software and other online programs from 2004 to 2012. </p>
<p>Since then, <a href="https://doi.org/10.3758/s13428-012-0258-1">experiments in building “intelligent tutors” to help students have driven advances in artificial intelligence</a>, social networks and computer hardware. And today, the abilities of ChatGPT to write essays, answer philosophical questions and solve computer coding problems may finally achieve Suppes’ goal of truly personalized tutoring via computer.</p>
<h2>Early versions of personalized learning</h2>
<p>In 1972, a new personalized learning system called <a href="https://physics.illinois.edu/people/history/PLATO">PLATO</a>, for Programmed Logic for Automated Teaching Operations, made its debut. It was the <a href="https://umsi580.lsait.lsa.umich.edu/s/PLATOs-Citizens/page/welcome">first widely available personalized learning system of its kind</a>. </p>
<p>Created by Don Bitzer, a professor of electrical engineering at the University of Illinois, PLATO allowed up to 1,000 students to be logged onto a mainframe computer simultaneously. Each student could complete different online courses in foreign languages, music, math and many other subjects while receiving feedback from the computer on their work. </p>
<p>PLATO <a href="https://ieeexplore.ieee.org/abstract/document/1450224?casa_token=Ku_RcPxOG2cAAAAA:90f2HPbsA22yxhwS8MiY5MoCe0n_xMMboNneA6YxmrOdGGd2JRJ19WF8Xq6eq_4oFhmTPOw">enabled students to reach the same level of achievement as in-person classes</a> in less time. And most students preferred this mode of instruction over sitting in a large lecture class. Yet, <a href="https://www.learntechlib.org/p/169366/">the system was too expensive</a> to be used by many colleges and universities. Each computer terminal was marketed at over US$8,000 – about $58,000 today – and schools were charged additional fees every time a student used the system. Still, PLATO’s success with students inspired a number of companies to create software that provided a similar kind of tutoring, including the College Curriculum Corporation, which was co-founded by Suppes.</p>
<p>Popular personal computer brands, such as Apple and Commodore, <a href="https://www.computerhistory.org/brochures/thm-42b97f98dbaf2/">advertised the availability of educational software</a> as a reason for families to invest in a home computer. </p>
<p>By 1985, researchers at Carnegie Mellon University were designing software using advances in artificial intelligence and cognitive psychology. They claimed that technology had advanced to the point where computer systems could be designed to <a href="https://doi.org/10.1126/science.228.4698.456">be as effective as human tutors</a>. However, even though there were over 10,000 pieces of educational software available at the time, much of it was of fairly low quality and did not provide real tutoring.</p>
<p>Although the more advanced designs of the educational software developed at Carnegie Mellon enabled students to learn significantly more than students in traditional classrooms, they were not widely used in schools. </p>
<p>In the 1980s and 1990s, a school would need a sizable number of expensive, high-powered computer workstations for students to use an intelligent tutor. Today, the computers are much more powerful and much less expensive. </p>
<p>And early intelligent tutors were used primarily in math, science and technology courses, which limited their appeal. Students could not ask the software programs a question. However, students could get feedback on answers to quizzes and tests.</p>
<p>In 2001, the next generation of intelligent tutoring systems was able to have <a href="https://ojs.aaai.org/index.php/aimagazine/article/view/1591">conversations with students in written English</a>. These systems, early chatbots, used advances in <a href="https://doi.org/10.1016/j.knosys.2006.11.014">natural language processing</a> to have dialogues with students about a range of subjects. Each system was designed specifically for one subject, such as physics. Adding another subject, like English, would require its own specialized tutoring system. </p>
<p>The next few years brought three developments, which enabled a forward leap in achieving the dream of computerized tutoring. One was the availability of broadband internet, which was faster than the dial-up connections of the 1990s. The second was the lower cost of more powerful computers. The third was investment by technology companies and government funding for university research in artificial intelligence, which led to the development of what we now call chatbots.</p>
<h2>The first chatbots</h2>
<p>By 2007, early AI chatbots provided tutoring by talking students through their answers to questions. Research showed these chatbots <a href="https://www.andrew.cmu.edu/user/nbradley/afrl/PAPERS/Tutorial%20Dialogue%20as%20Adaptive%20Collaborative%20Learning%20Support.pdf">supported learning outcomes similar to the effects of human tutors</a>. Even so, using chatbot technology to provide tutoring was primarily experimental.</p>
<p>During the 2010s, social networks and advances in artificial intelligence dramatically changed the way people interact with technology. Chatbots are integrated into many parts of our lives and are no longer experimental. People now talk to their phones, to their cars and their home appliances. Many people have virtual assistants at home. They can ask Alexa, Siri or Google for directions or the answers to simple homework questions.</p>
<p>Today, ChatGPT has advanced abilities that enable more complex conversations and can provide individualized tutoring much like what Suppes predicted in 1966.</p>
<p>I asked ChatGPT why it would be a good tutor, and it responded: “I am a good tutor for a student because I am able to understand and respond to a wide range of questions and prompts, and I can provide detailed and accurate information on a wide range of subjects. Additionally, I am able to communicate this information in a clear and easy-to-understand manner, making it accessible for students of all levels of understanding. Because I am a machine-learning model, I can also provide a large amount of information quickly and efficiently, which can be especially helpful for students who are trying to learn a lot of new material in a short amount of time.”</p>
<p>By contrast, Siri said, “Hmm … I don’t have an answer for that. Is there something else I can help you with?”</p>
<p>Recent research on using chatbots in college courses shows that how chatbots are used matters for encouraging students to learn more deeply. For instance, AI chatbots enabled significant improvements in learning in a graduate <a href="https://doi.org/10.1007/978-3-030-52240-7_50">course on cloud computing at Carnegie Mellon</a>. These learning gains occurred when the chatbots asked students to build on an existing argument or to provide more information about a claim they had made. In this case, the chatbot asked the student a question, rather than vice versa.</p>
<p>Many educators are <a href="https://www.washingtonpost.com/education/2022/12/28/chatbot-cheating-ai-chatbotgpt-teachers/">worried about students learning less with ChatGPT</a> since it can be used to cheat on assignments and papers. Others are worried about <a href="https://www.cnet.com/tech/computing/why-were-all-obsessed-with-the-mind-blowing-chatgpt-ai-chatbot/">ChatGPT giving wrong answers</a> or spreading misinformation.</p>
<p>Yet the history and research of intelligent tutors show that <a href="https://www.emerald.com/insight/content/doi/10.1108/ILS-04-2020-0095/full/html?casa_token=y4BFZ01alMkAAAAA:p8SqvBkycw2D2dgsSPcSihWHryBxHvcJrhMLM-s9ilij2UWXRL4fA_6ppd9bw4h5xxLLCOXC5y7LgSq7JTKZ_OM4C9jMXf8oxzIpsKgjBQM0OoJG">using the right design</a> to harness the power of chatbots like ChatGPT can make deeper, individualized learning available to almost anyone. For example, if people use ChatGPT to ask students questions that prompt them to revise or explain their work, <a href="https://doi.org/10.1007/978-3-030-52240-7_50">students will have better learning gains</a>. Since ChatGPT has access to far more knowledge than Aristotle ever did, it has great potential for providing tutoring to students to help them learn more than they would otherwise.</p><img src="https://counter.theconversation.com/content/198062/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>I worked for Pat Suppes at Stanford from 2004-2012, and am a research colleague of Carolyn Rose and Sreecharan Sankaranaran, who are co-authors of some research cited in this piece.</span></em></p>ChatGPT could lead to substantial learning gains if it’s used as a tutor, an online learning specialist says.Anne Trumbore, Chief Digital Learning Officer, Sands Institute for Lifelong Learning, University of VirginiaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1919302023-01-30T13:12:46Z2023-01-30T13:12:46ZLimits to computing: A computer scientist explains why even in the age of AI, some problems are just too difficult<figure><img src="https://images.theconversation.com/files/506497/original/file-20230125-24-e7inac.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5700%2C3788&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Computers are growing more powerful and more capable, but everything has limits.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/futuristic-semiconductor-and-circuit-board-royalty-free-image/1366897838">Yuichiro Chino/Moment via Getty Images</a></span></figcaption></figure><p>Empowered by artificial intelligence technologies, computers today can <a href="https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-chatbot-messages/672411/">engage in convincing conversations</a> with people, <a href="https://www.nbcnews.com/mach/science/ai-can-now-compose-pop-music-even-symphonies-here-s-ncna1010931">compose songs</a>, <a href="https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html">paint paintings</a>, play <a href="https://www.wired.com/story/alphabets-latest-ai-show-pony-has-more-than-one-trick/">chess and go</a>, and <a href="https://doi.org/10.1007/s12652-021-03612-z">diagnose diseases</a>, to name just a few examples of their technological prowess. </p>
<p>These successes could be taken to indicate that computation has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful. </p>
<p>There are two aspects to a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it runs. The hardware speed is limited by the laws of physics. Algorithms – basically <a href="https://theconversation.com/what-is-an-algorithm-how-computers-know-what-to-do-with-data-146665">sets of instructions</a> – are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational hurdles remain due to the limits of algorithms.</p>
<p>These hurdles include problems that are impossible for computers to solve and problems that are theoretically solvable but in practice beyond the capabilities of even the most powerful computers imaginable today. Mathematicians and computer scientists attempt to determine whether a problem is solvable by trying it out on an imaginary machine.</p>
<h2>An imaginary computing machine</h2>
<p>The modern notion of an algorithm, known as a Turing machine, was formulated in 1936 by British mathematician <a href="https://www.britannica.com/biography/Alan-Turing/Computer-designer">Alan Turing</a>. It’s an imaginary device that imitates how arithmetic calculations are carried out with a pencil on paper. The Turing machine is the template all computers today are based on.</p>
<p>To accommodate computations that would need more paper if done manually, the supply of imaginary paper in a <a href="https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html">Turing machine</a> is assumed to be unlimited. This is equivalent to an imaginary limitless ribbon, or “tape,” of squares, each of which is either blank or contains one symbol. </p>
<p>The machine is controlled by a finite set of rules and starts on an initial sequence of symbols on the tape. The operations the machine can carry out are moving to a neighboring square, erasing a symbol and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or “halts,” the symbols remaining on the tape are the output or result. </p>
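<p>For readers who like to see the idea in code, here is a minimal sketch of such a machine in Python. It is not part of Turing’s formulation – the rule format and the example machine are purely illustrative – but it shows how little machinery the model needs:</p>
<pre><code>
# Minimal Turing machine simulator (illustrative only).
# rules maps (state, symbol) to (new_symbol, move, new_state);
# move is +1 for one square right, -1 for one square left.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # unlimited tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(flip_bits, "1011"))  # prints 0100_
</code></pre>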
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/dNRDvLACg5Q?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is a Turing machine?</span></figcaption>
</figure>
<p>Computing is often about decisions with yes or no answers. By analogy, a medical test (type of problem) checks if a patient’s specimen (an instance of the problem) has a certain disease indicator (yes or no answer). The instance, represented in a Turing machine in digital form, is the initial sequence of symbols. </p>
<p>A problem is considered “solvable” if a Turing machine can be designed that halts for every instance whether positive or negative and correctly determines which answer the instance yields. </p>
<h2>Not every problem can be solved</h2>
<p>Many problems are solvable using a Turing machine and therefore can be solved on a computer, while many others are not. For example, the domino problem, a variation of the tiling problem formulated by Chinese American mathematician <a href="https://digitalcommons.rockefeller.edu/faculty-members/109/">Hao Wang</a> in 1961, is not solvable. </p>
<p>The task is to use a set of dominoes to cover an entire grid while, following the rules of most dominoes games, matching the number of pips on the ends of abutting dominoes. It turns out that there is no algorithm that can start with a set of dominoes and determine whether or not the set will completely cover the grid.</p>
<h2>Keeping it reasonable</h2>
<p>A number of solvable problems can be solved by algorithms that halt in a reasonable amount of time. These “<a href="https://mathworld.wolfram.com/PolynomialTime.html">polynomial-time algorithms</a>” are efficient algorithms, meaning it’s practical to use computers to solve instances of them.</p>
<p>Thousands of other solvable problems are not known to have polynomial-time algorithms, despite ongoing intensive efforts to find such algorithms. These include the Traveling Salesman Problem. </p>
<p>The Traveling Salesman Problem asks whether a set of points with some points directly connected, called a graph, has a path that starts from any point and goes through every other point exactly once, and comes back to the original point. Imagine that a salesman wants to find a route that passes all households in a neighborhood exactly once and returns to the starting point. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/xi5dWND499g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Traveling Salesman Problem quickly gets out of hand when you get beyond a few destinations.</span></figcaption>
</figure>
<p>These problems, called <a href="https://www.mathsisfun.com/sets/np-complete.html">NP-complete</a>, were independently formulated and shown to exist in the early 1970s by two computer scientists, American Canadian <a href="https://amturing.acm.org/award_winners/cook_n991950.cfm">Stephen Cook</a> and Ukrainian American <a href="https://academickids.com/encyclopedia/index.php/Leonid_Levin">Leonid Levin</a>. Cook, whose work came first, was awarded the 1982 Turing Award, the highest honor in computer science, for this work.</p>
<h2>The cost of knowing exactly</h2>
<p>The best-known algorithms for NP-complete problems essentially search through all possible answers for a solution. Run this way, the Traveling Salesman Problem on a graph of a few hundred points would take years on a supercomputer. Such algorithms are inefficient because they take no mathematical shortcuts.</p>
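<p>A short sketch makes the brute-force nature of the search concrete. The code below – an illustration written for this explanation, not a practical algorithm – simply tries every ordering of points to see whether the graph has a round trip that visits each point exactly once:</p>
<pre><code>
# Brute-force check for a round trip visiting every point exactly once.
# edges is a set of frozensets {a, b} for pairs of directly connected points.
from itertools import permutations

def has_round_trip(points, edges):
    first, rest = points[0], points[1:]
    for ordering in permutations(rest):             # (n - 1)! orderings to try
        route = (first,) + ordering + (first,)
        if all(frozenset(pair) in edges for pair in zip(route, route[1:])):
            return True
    return False

square = ["A", "B", "C", "D"]
square_edges = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]}
print(has_round_trip(square, square_edges))  # True: A-B-C-D-A
</code></pre>
<p>With four points there are only six orderings to try; with a few hundred points the number of orderings is astronomically large, which is why such searches take years even on a supercomputer.</p>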
<p>Practical algorithms that address these problems in the real world can only offer approximations, though <a href="https://theconversation.com/planning-the-best-route-with-multiple-destinations-is-hard-even-for-supercomputers-a-new-approach-breaks-a-barrier-thats-stood-for-nearly-half-a-century-148308">the approximations are improving</a>. Whether there are efficient polynomial-time algorithms that can <a href="https://www.claymath.org/millennium-problems/p-vs-np-problem">solve NP-complete problems</a> is among the <a href="https://www.claymath.org/millennium-problems/millennium-prize-problems">seven millennium open problems</a> posted by the Clay Mathematics Institute at the turn of the 21st century, each carrying a prize of US$1 million.</p>
<h2>Beyond Turing</h2>
<p>Could there be a new form of computation beyond Turing’s framework? In 1982, American physicist <a href="http://www.richardfeynman.com/">Richard Feynman</a>, a Nobel laureate, put forward the idea of computation based on quantum mechanics. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jHoEjvuPoB8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is a quantum computer?</span></figcaption>
</figure>
<p>In 1995, Peter Shor, an American applied mathematician, presented a quantum algorithm to <a href="https://www.geeksforgeeks.org/shors-factorization-algorithm/">factor integers in polynomial time</a>. Mathematicians believe that this is unsolvable by polynomial-time algorithms in Turing’s framework. Factoring an integer means finding a smaller integer greater than 1 that can divide the integer. For example, the integer 688,826,081 is divisible by a smaller integer 25,253, because 688,826,081 = 25,253 x 27,277. </p>
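<p>The obvious classical approach, trial division, makes the contrast clear. The snippet below is shown only to illustrate the classical cost of factoring – it has nothing to do with how Shor’s quantum algorithm works:</p>
<pre><code>
# Trial division: check candidate divisors one at a time.
# Fine for a nine-digit number, hopeless for the hundreds-of-digits
# numbers used in cryptography.
def smallest_factor(n):
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return n  # n is prime

print(smallest_factor(688_826_081))  # 25253, since 688,826,081 = 25,253 x 27,277
</code></pre>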
<p>A major algorithm called the <a href="https://www.geeksforgeeks.org/rsa-algorithm-cryptography/">RSA algorithm</a>, widely used in securing network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, should it become a reality, will <a href="https://theconversation.com/quantum-computers-threaten-our-whole-cybersecurity-infrastructure-heres-how-scientists-can-bulletproof-it-196065">change the landscape of cybersecurity</a>. </p>
<p>Can a full-fledged quantum computer be built to factor integers and solve other problems? Some scientists believe it can be. Several groups of scientists around the world are working to build one, and some have already built small-scale quantum computers. </p>
<p>Nevertheless, as with every novel technology that came before it, quantum computation is almost certain to run into issues that impose new limits of its own.</p><img src="https://counter.theconversation.com/content/191930/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jie Wang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In the age of AI, people might wonder if there’s anything computers can’t do. The answer is yes. In fact, there are numerous problems that are beyond the reach of even the most powerful computers.Jie Wang, Professor of Computer Science, UMass LowellLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1873262022-09-14T12:21:25Z2022-09-14T12:21:25ZLies are more common on laptops than on phones – how devices may shape our behavior when bargaining with strangers<figure><img src="https://images.theconversation.com/files/482528/original/file-20220902-12-14z84h.jpg?ixlib=rb-1.1.0&rect=160%2C185%2C8082%2C5302&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A deceptive device? </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/cropped-shot-of-womans-hand-typing-on-computer-royalty-free-image/1309760275">d3sign/Moment via Getty Images</a></span></figcaption></figure><p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take about interesting academic work.</em> </p>
<h2>The big idea</h2>
<p>People appear to be more willing to lie for personal gain when they use a laptop versus a smartphone, our <a href="https://doi.org/10.1108/IJCMA-10-2021-0157">new peer-reviewed research</a> shows. Given that the two devices have nearly identical technical capabilities – they’re both boxes with electronic brains – this surprised us and highlights the psychological impact of technology.</p>
<p>Our first in a planned series of studies was a version of what economists call the <a href="https://www.sciencedirect.com/topics/neuroscience/ultimatum-game">ultimatum game</a>. In the take-it-or-leave-it exercise, one player is told they’ll receive a certain sum of money, some of which they must split with a partner. But they can tell their partner whatever they choose about the total sum and how much of it they’re willing to offer – allowing them to lie and keep more of the kitty for themselves. However, the partner must agree to the offered sum for either of them to get any money. </p>
<p>In our version, we told 137 graduate students to imagine they’d share US$125 with a fellow student, if their randomly assigned partner agreed to the deal. Half of them used a laptop; the rest participated with their smartphone. </p>
<p>While the vast majority of participants fibbed at least a little, laptop users were much more likely to lie – and by a lot more. Eighty-two percent of laptop participants were deceptive, compared with 62% of phone users, and on average claimed the pot was $20 less. </p>
<p>Although this was hypothetical and didn’t involve real money, previous <a href="http://www.doi.org/10.1037/a0018627">research by us</a> and <a href="http://www.doi.org/10.1177/002200277802200102">other</a> <a href="http://www.doi.org/10.1037/0022-3514.37.6.833">scholars</a> shows that these scenarios are good at predicting actual behavior. </p>
<p>To see if our finding held up in a more real-world scenario, we devised a negotiation experiment in which two people were told to barter over the purchase price of an imaginary semiconductor factory one of them owned. We split 222 students into buyers and sellers. Buyers were confidentially told that the market value of the property was estimated at $21 million. </p>
<p>We then asked buyers to tell sellers what they thought was the fair market value of the property and make an initial offer. Like in the first experiment, about half of the students used their phones and the others negotiated on laptops. </p>
<p>Again, laptop users were more deceptive. On average, they told sellers the fair value was $16.7 million – lowballing it by over $4 million – compared with $18.1 million for phone participants. In both cases, their actual offers were only slightly higher than what they said was the market value.</p>
<p>To find out what’s going on, we asked participants of a separate study about their associations with each device and found a consistent pattern. Phones triggered associations of friends and family, and laptops led to thoughts of work, success and accomplishments – which <a href="http://www.doi.org/10.1007/s10551-020-04729-5">previous research</a> has shown can trigger unethical behavior.</p>
<h2>Why it matters</h2>
<p>People’s use of technology in decision-making can subtly yet fundamentally shift the way our brains work. </p>
<p>In past work, we found that people <a href="https://doi.org/10.1037/a0018627">lie more frequently</a>, <a href="https://doi.org/10.1007/s11211-008-0084-x">cooperate less</a> and <a href="http://www.doi.org/10.1016/j.obhdp.2005.07.001">evaluate others more negatively</a> when they conduct tasks virtually as opposed to in person, with physical tools like pens and paper. </p>
<p>While studies like ours can’t perfectly predict how behavior will play out in real life, these experiments do offer more evidence of the subtle ways technology can alter human behavior. </p>
<h2>What still isn’t known</h2>
<p>We don’t know whether our findings would hold for other tasks and within the context of existing relationships. Even within our experiments, other factors may be affecting people’s choice to lie, such as different screen sizes or locations.</p>
<p>Our research shows the continued need to assess how technological tools are used in real settings, including the unconscious changes these devices might have on daily decisions and ethical standards.</p><img src="https://counter.theconversation.com/content/187326/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Terri R. Kurtzberg receives funding from Rutgers University. </span></em></p><p class="fine-print"><em><span>Charles Naquin receives funding from DePaul University. </span></em></p><p class="fine-print"><em><span>Mason Ameri does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A new study found that the device people used to communicate in a negotiation made a big difference in how likely they were to deceive for personal gain.Terri R. Kurtzberg, Associate Professor of Management and Global Business, Rutgers University - NewarkCharles Naquin, Associate Professor of Management, DePaul UniversityMason Ameri, Associate Professor of Professional Practice, Rutgers University - NewarkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1857052022-07-04T02:13:39Z2022-07-04T02:13:39ZWhy we talk about computers having brains (and why the metaphor is all wrong)<figure><img src="https://images.theconversation.com/files/471320/original/file-20220628-23-kcum6o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A brainy machine? Shakey, the world’s first AI-based robot.</span> <span class="attribution"><a class="source" href="https://www.sri.com/case-study/the-man-the-myth-the-legend-meet-shakey-the-robot-the-worlds-first-ai-based-robot/">SRI International</a></span></figcaption></figure><p>It is a truth, universally acknowledged, that the machines are taking over. What is less clear is whether the machines know that. Recent claims by a Google engineer that the LaMBDA AI Chatbot might be conscious made <a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/">international headlines</a> and sent <a href="https://theconversation.com/is-googles-lamda-conscious-a-philosophers-view-184987">philosophers</a> into a <a href="https://theconversation.com/a-google-software-engineer-believes-an-ai-has-become-sentient-if-hes-right-how-would-we-know-185024">tizz</a>. Neuroscientists and linguists were <a href="https://theconversation.com/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099">less enthused</a>.</p>
<p>As AI makes greater gains, debate about the technology moves from the hypothetical to the concrete and from the future to the present. This means a broader cross-section of people – not just philosophers, linguists and computer scientists but also policy-makers, politicians, judges, lawyers and law academics – need to form a more sophisticated view of AI.</p>
<p>After all, how policy-makers talk about AI is already shaping decisions about how to regulate that technology. </p>
<p>Take, for example, the case of <em><a href="https://www.dentons.com/en/insights/articles/2022/may/17/full-federal-court-of-australia-holds-that-an-artificial-intelligence#:%7E:text=The%20Full%20Court%20of%20the,inventor%E2%80%9D%20on%20a%20patent%20application.">Thaler v Commissioner of Patents</a></em>, which was launched in the Federal Court of Australia after the commissioner for patents rejected an application naming an AI as an inventor. When Justice Beech disagreed and allowed the application, he made two findings. </p>
<p>First, he found that the word “inventor” simply described a function and could be performed either by a human or a thing. Think of the word “dishwasher”: it might describe a person, a kitchen appliance, or even an enthusiastic dog.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/8TeLaKs06OM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Nor does the word “dishwasher” necessarily imply that the agent is good at its job…</span></figcaption>
</figure>
<p>Second, Justice Beech used the metaphor of the brain to explain what AI is and how it works. Reasoning by analogy with human neurons, he found that the AI system in question could be considered autonomous, and so might meet the requirements of an inventor.</p>
<p>The case raises an important question: where did the idea that AI is like a brain come from? And why is it so popular?</p>
<h2>AI for the mathematically challenged</h2>
<p>It is understandable that people with no technical training might rely on metaphors to understand complex technology. But we would hope that policy-makers might develop a slightly more sophisticated understanding of AI than the one we get from Robocop.</p>
<p>My research considered how law academics talk about AI. One significant challenge for this group is that they are frequently maths-phobic. As the legal scholar <a href="https://oxford.universitypressscholarship.com/view/10.1093/oso/9780195178135.001.0001/isbn-9780195178135">Richard Posner</a> argues, the law </p>
<blockquote>
<p>provides a refuge for bright youngsters who have “math block”, though this usually means they shied away from math and science courses because they could get higher grades with less work in verbal fields.</p>
</blockquote>
<p>Following Posner’s insight I reviewed all uses of the term “neural network” – the usual label for a common kind of AI system – published in a set of Australian law journals between 2015 and 2021. </p>
<p>Most papers made some attempt to explain what a neural network was. But only three of the nearly 50 papers attempted to engage with the underlying mathematics beyond a broad reference to statistics. Only two papers used visual aids to assist in their explanation, and none at all made use of the computer code or mathematical formulas central to neural networks.</p>
<p>By contrast, two-thirds of the explanations referred to the “mind” or biological neurons. And the overwhelming majority of those made a <em>direct</em> analogy. That is, they suggested AI systems actually replicated the function of human minds or brains. The metaphor of the mind is clearly more attractive than engaging with the underlying maths.</p>
<p>It is little wonder, then, that our policy-makers and judges – like the general public – make such heavy use of these metaphors. But the metaphors are leading them astray.</p>
<h2>Where did the idea that AI is like the brain come from?</h2>
<p>Understanding what produces intelligence is an ancient philosophical problem that was ultimately taken up by the science of psychology. An influential statement of the problem was made in William James’ 1890 book <em>Principles of Psychology</em>, which set early scientific psychologists the task of identifying a one-to-one correlation between a mental state and a physiological state in the brain.</p>
<p>Working in the 1920s, neurophysiologist Warren McCulloch attempted to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicholas Rashevsky’s influential biophysics group, which was attempting to bring the mathematical techniques used in physics to bear on the problems of neuroscience. </p>
<p>Key to these efforts were attempts to build simplified models of how biological neurons might work, which could then be refined into more sophisticated, mathematically rigorous explanations. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/were-told-ai-neural-networks-learn-the-way-humans-do-a-neuroscientist-explains-why-thats-not-the-case-183993">We're told AI neural networks 'learn' the way humans do. A neuroscientist explains why that's not the case</a>
</strong>
</em>
</p>
<hr>
<p>If you have vague recollections of your high school physics teacher trying to explain the motion of particles by analogy with billiard balls or long metal slinkies, you get the general picture. Start with some very simple assumptions, understand the basic relations and work out the complexities later. In other words, <a href="https://en.wikipedia.org/wiki/Spherical_cow">assume a spherical cow</a>.</p>
<p>In 1943, McCulloch and logician Walter Pitts proposed a simple model of neurons meant to explain the “heat illusion” <a href="https://en.wikipedia.org/wiki/Thermal_grill_illusion">phenomenon</a>. While it was ultimately an unsuccessful picture of how neurons work – McCulloch and Pitts later abandoned it – it was a very helpful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural networks” for example – have persisted to this day.</p>
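<p>The link to logic circuits is easy to see in a small sketch. A McCulloch-Pitts-style unit simply “fires” when the weighted sum of its binary inputs reaches a threshold – the weights and thresholds below are illustrative, not taken from the 1943 paper:</p>
<pre><code>
# A threshold unit: output 1 when the weighted sum of inputs reaches the threshold.
def fires(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With equal weights, the threshold alone turns the same unit into AND or OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", fires((a, b), (1, 1), threshold=2),
                    "OR:",  fires((a, b), (1, 1), threshold=1))
</code></pre>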
<p>That computer scientists still use terms like these seems to have fuelled the popular misconception that there is an intrinsic link between certain kinds of computer programs and the human brain. It is as though the simplified assumption of a spherical cow turned out to be a useful way to describe how ball pits should be designed and left us all believing there is some necessary link between children’s play equipment and dairy farming.</p>
<p>This would be not much more than a curiosity of intellectual history were it not the case that these misconceptions are shaping our policy responses to AI.</p>
<p>Is the solution to force lawyers, judges and policy-makers to pass high school calculus before they start talking about AI? Certainly they would object to any such proposal. But in the absence of better mathematical literacy we need to use better analogies.</p>
<p>While the Full Federal Court has since overturned Justice Beech’s decision in <em>Thaler</em>, it specifically noted the need for policy development in this area. Without giving non-specialists better ways of understanding and talking about AI, we’re likely to continue to have the same challenges.</p><img src="https://counter.theconversation.com/content/185705/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tomas Fitzgerald has received funding from the WA Bar Association. He is a member of WA Labor and the NTEU. </span></em></p>It’s time we stopped using the brain as shorthand for machinesTomas Fitzgerald, Lecturer in Law, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1737162022-02-20T17:26:01Z2022-02-20T17:26:01ZAshamed of asking for technical support? You are not alone!<p>The spread of new technologies tied to a race for innovation, products and services’ growing interdependence and, more generally, <a href="https://theconversation.com/will-services-be-the-saviour-of-manufacturing-70081">the transformation of products into services</a> can also make them more complex to use. To help consumers, companies are introducing a wealth of instructions, tutorials and pictograms that can be difficult to understand, all too often leaving consumers even more <a href="https://theconversation.com/confusopoly-why-companies-are-motivated-to-deliberately-confuse-39563">confused</a>.</p>
<p>The scene is all too familiar: who has never scratched their head in front of a machine, a new device or an online booking form? While many innovations are designed and implemented to ease the consumer’s daily life, they may also be the source of new problems. Consumers adopt various strategies to cope with these difficulties of use – online tutorials, help from neighbours or friends – but many, perhaps surprisingly, also end up dropping their new product or service altogether.</p>
<p>The reasons for not seeking help may be psychological. In the field of social psychology, over the last 40 years, researchers have been exploring help-seeking in various contexts, mostly focusing on medical and psychological help or on help-seeking in the classroom. It appears that not everyone is <a href="https://www.washingtonpost.com/lifestyle/wellness/asking-help-coronavirus-hard-shame/2020/04/17/e1d3ef90-7e91-11ea-8013-1b6da0e4a2b7_story.html">comfortable asking for help</a> and that some individuals systematically seem to avoid seeking help. </p>
<p>Indeed, help-seeking may be perceived as threatening, as it may call into question the asker’s competence in their own eyes. They may also fear appearing incompetent in the eyes of the helper. Help-seeking also conflicts with values that matter to Westerners: autonomy and control. Finally, it may restrict one’s freedom of choice, for example when a person feels forced to accept a commercial offer in return for the help given.</p>
<h2>Fear of appearing incompetent</h2>
<p>In our article to be published in the journal <a href="https://journals.sagepub.com/home/rme"><em>Recherches et Application en Marketing</em></a>, we explored one of the possible responses for the modern troubled consumer: seeking the supplier’s help. Although it appears to be a fairly obvious solution, it is not often used by consumers.</p>
<p>Through a series of studies, we sought to understand whether help-seeking avoidance occurs in a consumption context, when a customer finds it difficult to use a product or service. A qualitative study and four quantitative studies (samples of between 150 and 450 individuals) led us to the conclusion that more than a third of consumers – a strikingly large proportion – tend to avoid asking for help.</p>
<p>These studies also enabled us to build a psychometric measurement tool to assess the tendency of each individual to be more or less avoidant. The common behaviour of help-seeking avoidance is based on two dimensions, stemming from the qualitative part of the study. The first dimension is the refusal to ask for help because it causes embarrassment or even shame for the asker, as one consumer we interviewed explained:</p>
<blockquote>
<p>“Sometimes you feel ashamed, because you are afraid that the person you are talking to might think you don’t understand anything.”</p>
</blockquote>
<p>The second dimension lies in the evaluation of the interlocutor. More specifically, in both their perceived willingness and their perceived ability to help, as illustrated by this statement of a consumer:</p>
<blockquote>
<p>“[Companies] are all the same, they are always rude and they never solve your problems.”</p>
</blockquote>
<p>These two dimensions help explain consumers’ intention to seek help or not. This research also highlights the relationship between the avoidance tendency and specific psychological traits such as an external <a href="https://dictionary.apa.org/locus-of-control">locus of control</a> (the tendency to attribute the cause of events to factors outside one’s control – other people or a specific context, for example), <a href="https://dictionary.apa.org/state-orientation">state orientation</a> (the tendency not to act) and <a href="https://www.nhs.uk/mental-health/conditions/social-anxiety/">social anxiety</a>.</p>
<h2>Companies can take action</h2>
<p>The difficulties encountered are admittedly damaging for the consumer, but they are also ultimately damaging for the company. Indeed, a customer who cannot make full use of a product may abandon not only the product but the brand. They may also express dissatisfaction or frustration and spread negative word of mouth. Incidentally, high-tech product launches have a high failure rate.</p>
<p>What can companies do to ease and encourage requests from their customers? First of all, they can act on the potential embarrassment. Indeed, it appears that help requested through a screen leads to less discomfort. In this respect, live chat is a very interesting tool to encourage the triggering of requests.</p>
<p>In addition, companies can communicate their willingness and ability to assist their customers – and take the drama out of asking for help. Indeed, many companies still do not advertise the means available to contact them, often for fear of having to manage too many customer contacts. According to our research, consumers interpret this as an unwillingness to help.</p>
<p>Moreover, communicating the positive outcomes experienced by customers who have asked for help – or, more generally, who have contacted customer service – would also be an effective lever. Finally, in addition to being beneficial for customer satisfaction, requests for help have another positive impact: they allow improvements to be made to products and services.</p><img src="https://counter.theconversation.com/content/173716/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ben Voyer has received funding from the Cartier - ESCP - HEC Paris Turning Points Chair</span></em></p><p class="fine-print"><em><span>Marion Sanglé-Ferrière ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>One person out of three does not dare to turn to the supplier in case of problems. Feeling of shame, as well as doubts about the ability of the person they are dealing explain that figure.Marion Sanglé-Ferrière, Maître de conférence en marketing, CY Cergy Paris UniversitéBen Voyer, Cartier Chaired Professor of Behavioural Sciences, Full Professor, Department of Entrepreneurship, ESCP Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1758772022-01-28T16:35:54Z2022-01-28T16:35:54ZIntel can’t even grow profits during a global chip shortage – where did it all go wrong?<figure><img src="https://images.theconversation.com/files/443166/original/file-20220128-23-1d29d2t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Intel Inside no longer means what it once did. </span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/0uXzoEzYZ4I">Slejven Djurakovic</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>American chip-making giant Intel is a shadow of its former self. Despite the <a href="https://www.zdnet.com/article/heres-what-analysts-expect-from-chip-shortages-in-2022/">global semiconductor shortage</a>, which has <a href="https://finance.yahoo.com/news/3-chip-stocks-watch-semiconductor-220000966.html">boosted rival</a> chipmakers, Intel is making less money than a year ago with net income down 21% year over year to <a href="https://www.wsj.com/articles/intel-wins-eu-antitrust-appeal-as-court-annuls-1-2-billion-fine-11643193031">US$4.6 billion</a> (£3.4 billion). Unfortunately, this is an ongoing trend. </p>
<p>Intel was the world’s largest chipmaker until 2021, when it was <a href="https://asia.nikkei.com/Business/Tech/Semiconductors/Samsung-overtook-Intel-as-top-chip-seller-in-2021">dethroned by Samsung</a>. Though Samsung’s main business is memory chips, which is a different segment of the market to Intel’s microprocessors, it is a sign of Intel’s decline. We’ve been tracking global companies’ future-readiness at the International Institute for Management Development (IMD), and Intel now <a href="https://www.imd.org/future-readiness-indicator/home/technology/">comes out 16th</a> in the technology sector. </p>
<p>There are two fundamental issues, <a href="https://www.barrons.com/articles/intel-bitcoin-mining-chips-51642617554">according to</a> Matt Bryson, an analyst at Wedbush Securities: “[Intel] fell behind AMD in chip design and Taiwan Semiconductor (TSMC) in manufacturing.” During the most recent earnings call with analysts, CEO Pat Gelsinger had to <a href="https://seekingalpha.com/article/4481909-intel-corporation-intc-ceo-pat-gelsinger-on-q4-2021-results-earnings-call-transcript">concede that</a> the technology in Intel’s data-centre processors hadn’t been improved in five years. In his words, it was “an embarrassing thing to say”. </p>
<p>How did this happen to a company that for many years was well ahead of its competition, and what are the chances of a turnaround?</p>
<h2>Intel’s in-house model</h2>
<p>Intel used to be the undisputed king of microprocessors. PCs were made by many companies, but these were effectively just brand names. The prowess of the machines depended on whether they had an <a href="https://theconversation.com/happy-50th-birthday-intel-you-look-a-lot-like-the-next-kodak-100065">“Intel inside”</a>. </p>
<p>Here is how you compete as a chipset manufacturer: you etch more transistors onto a slice of silicon wafer than your rivals can. To achieve this, Intel outspent its rivals on R&D and attracted the best scientists. But most importantly, it kept full control of both product design and manufacturing. </p>
<p>Intel’s engineers – from research to design to manufacturing – have always worked as a <a href="https://www.extremetech.com/computing/127987-deliberate-excellence-why-intel-leads-the-world-in-semiconductor-manufacturing">close in-house team</a>. In contrast, fellow US rivals like Qualcomm, Nvidia and AMD have either shed their manufacturing capacity or never had it in the first place. They outsource to suppliers such as TSMC and other third-party foundries for the same reason that most of the stuff sold in Walmart is made in China: it’s cheaper.</p>
<p><strong>Share performances of leading chipmakers, 2019-22</strong></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Chart showing share performance of global chipmakers since 2019" src="https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=341&fit=crop&dpr=1 600w, https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=341&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=341&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=429&fit=crop&dpr=1 754w, https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=429&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/443162/original/file-20220128-25-1dqrpxv.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=429&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Orange = Nvidia, yellow = TSMC, turquoise = Qualcomm, purple = Samsung, blue = Intel.</span>
<span class="attribution"><span class="source">Trading View</span></span>
</figcaption>
</figure>
<p>The challenge with outsourcing manufacturing is that your suppliers are probably not in the same building as you. Meetings won’t happen at the watercoolers or in the staff cafeteria. It takes scheduling and coordination. There’s bureaucracy. It’s hard to be on the same page.</p>
<p>The problems that this can cause are all too evident – for a long while, TSMC and Nvidia <a href="https://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless">blamed each other</a> for manufacturing issues, for instance. For years, Intel’s one-team approach enabled it to pull further and further away from the competition, with the most powerful processors on the market. Yet what happened next was a classic case of disruption. </p>
<h2>The great library of Taiwan</h2>
<p>When mobile <a href="https://theconversation.com/happy-50th-birthday-intel-you-look-a-lot-like-the-next-kodak-100065">took off</a>, phone chipsets didn’t require as much computing power as those in a laptop or PC, since the priority was energy-saving to extend battery life on a single charge. As Intel was in the business of selling top-quality chips for high margins, it left its rivals to supply chipsets for this new market. As a result, Intel got locked into selling ever more expensive and power-guzzling CPUs for PCs. </p>
<p>With Qualcomm and Apple increasing orders to TSMC to supply Androids and iPhones, the Taiwanese supplier had to master remote work many years before the rest of us. It built up a formidable intellectual property (IP) library online, containing not only its own IP but also that of other suppliers in the value chain. </p>
<p>TSMC could now quickly tell its customers what was possible from a manufacturing perspective and encode such knowledge into design rules. Transparency was total. Its customers could take what was available from the menu and stretch their product design to the limit.</p>
<p>TSMC’s library has gradually become the industry’s largest. The best part is that workflow coordination is done online in a “virtual foundry” system that involves performance simulation, computer modelling and instant feedback. With virtual workflow that improves month after month, year after year, TSMC has steadily neutralised Intel’s advantages. </p>
<h2>Risk and demand</h2>
<p>TSMC doesn’t have to shoulder the risks of launching a new product. It just needs to excel in manufacturing, because if a Qualcomm product fails, AMD’s may take off. TSMC can switch capacity from one client to another. Risk is mitigated when demand is pooled.</p>
<p>For chip designers, outsourcing to TSMC has gradually meant they can afford to be fast-moving and bold in product design. If a new chip doesn’t sell, they can pull the plug without having to worry about the factory: that’s TSMC’s problem. </p>
<p>That’s how Nvidia has evolved beyond deploying graphics processors only in the gaming sector; it’s now leading in designing chipsets for AI applications. And AMD, an underdog close to bankruptcy in 2014, now makes some of the most powerful processors. </p>
<p>Intel, meanwhile, still needs to ensure that every product wins with enough volume to feed its network of factories, each costing billions of dollars. This has made the company more and more conservative. And having stuck to supplying chips to PCs, servers and data centres, it is struggling to innovate. Tellingly, the company’s gross margin – total revenue minus the cost of production – <a href="https://www.macrotrends.net/stocks/charts/INTC/intel/gross-margin">has been sliding</a> for nearly a decade. The biggest danger for a technology company is that it’s not developing leading-edge products fast enough, backsliding into selling commodities. </p>
<p>The big issue for Pat Gelsinger is, how can a company built on self-reliance transform its culture quickly? He is <a href="https://www.cnbc.com/2021/03/23/intel-makes-foundry-strategy-shift-under-new-ceo-pat-gelsinger-.html">talking about</a> building a foundry service to regain scale in manufacturing. But the question is, how can Intel become a collaborative organisation not in a decade, but in a year? </p>
<p>Andy Grove, the legendary late chairman of Intel, got it right. He said: “Only the paranoid survive.”</p><img src="https://counter.theconversation.com/content/175877/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Howard Yu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The story of how America’s great chipmaker’s main strength became an albatross around its neck.Howard Yu, Professor of Management and Innovation, International Institute for Management Development (IMD)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1728502021-12-01T13:36:14Z2021-12-01T13:36:14ZHow the US census led to the first data processing company 125 years ago – and kick-started America’s computing industry<figure><img src="https://images.theconversation.com/files/434761/original/file-20211130-27-1uk0tsc.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C2394%2C2307&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This electromechanical machine, used in the 1890 U.S. census, was the first automated data processing system.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/niallkennedy/6414584">Niall Kennedy/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>The U.S. Constitution requires that a population count be conducted at the beginning of every decade. </p>
<p>This census has always been charged with political significance, and continues to be. That’s clear from <a href="https://www.cnn.com/2020/09/09/politics/census-challenges/index.html">the controversies in the run-up to the 2020 census</a>. </p>
<p>But it’s less widely known how important the census has been in developing the U.S. computer industry, a story that I tell in my book, “<a href="https://jhupbooks.press.jhu.edu/title/republic-numbers">Republic of Numbers: Unexpected Stories of Mathematical Americans through History</a>.” That history includes the founding of the first automated data processing company, the <a href="https://www.smithsonianmag.com/smithsonian-institution/herman-holleriths-tabulating-machine-2504989/">Tabulating Machine Company</a>, 125 years ago on December 3, 1896.</p>
<h2>Population growth</h2>
<p>The only use of the census clearly specified in the Constitution is to allocate seats in the House of Representatives. More populous states get more seats. </p>
<p>A minimalist interpretation of the census mission would require reporting only the overall population of each state. But the census has never confined itself to this.</p>
<p>A complicating factor emerged right at the beginning, with the Constitution’s distinction between “free persons” and “<a href="http://www.digitalhistory.uh.edu/disp_textbook.cfm?smtID=3&psid=163">three-fifths of all other persons</a>.” This was the Founding Fathers’ infamous mealy-mouthed compromise between those states with a large number of enslaved persons and those states where relatively few lived. </p>
<p><a href="https://www.census.gov/history/www/through_the_decades/index_of_questions/1790_1.html">The first census</a>, in 1790, also made nonconstitutionally mandated distinctions by age and sex. In subsequent decades, many other personal attributes were probed as well: occupational status, marital status, educational status, place of birth and so on.</p>
<p>As the country grew, each census required greater effort than the last, not merely to collect the data but also to compile it into usable form. <a href="https://www.jstor.org/stable/24987147?seq=1#page_scan_tab_contents">The processing of the 1880 census</a> was not completed until 1888. </p>
<p>It had become a mind-numbingly boring, error-prone, clerical exercise of a magnitude rarely seen. </p>
<p>Since the population was evidently continuing to grow at a rapid pace, those with sufficient imagination could foresee that processing the 1890 census would be gruesome indeed without some change in procedure. </p>
<p><iframe id="1Onyi" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/1Onyi/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>A new invention</h2>
<p>John Shaw Billings, a physician assigned to assist the Census Office with compiling health statistics, had closely observed the immense tabulation efforts required to deal with the raw data of 1880. He expressed his concerns to a young mechanical engineer assisting with the census, Herman Hollerith, a recent graduate of the Columbia School of Mines. </p>
<p>On Sept. 23, 1884, the U.S. Patent Office recorded a submission from the 24-year-old Hollerith, titled “<a href="https://pdfpiw.uspto.gov/.piw?PageNum=0&docid=00395782&IDKey=73D9506C5930%0D%0A&HomeUrl=http%3A%2F%2Fpatft.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO1%2526Sect2%3DHITOFF%2526d%3DPALL%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526s1%3D0395782.PN.%2526OS%3DPN%2F0395782%2526RS%3DPN%2F0395782">Art of Compiling Statistics</a>.”</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an old black and white photograph showing a man seated at a wooden desk-like machine looking at a bank of indicator dials" src="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=709&fit=crop&dpr=1 600w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=709&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=709&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=891&fit=crop&dpr=1 754w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=891&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=891&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Hollerith electric tabulating machine in use in 1902.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/history/img/1902_Hollerith_electric_tabulating_machine.jpg">United States Census Bureau</a></span>
</figcaption>
</figure>
<p>By progressively improving the ideas of this initial submission, Hollerith would decisively win an 1889 competition to improve the processing of the 1890 census. </p>
<p>The <a href="https://www.census.gov/history/www/innovations/technology/the_hollerith_tabulator.html">technological solutions</a> devised by Hollerith involved a suite of mechanical and electrical devices. The first crucial innovation was to translate data on handwritten census tally sheets to patterns of holes punched in cards. As Hollerith phrased it, in the 1889 revision of his patent application,</p>
<blockquote>
<p>“A hole is thus punched corresponding to person, then a hole according as person is a male or female, another recording whether native or foreign born, another either white or colored, &c.”</p>
</blockquote>
<p>This process required developing special machinery to ensure that holes could be punched with accuracy and efficiency. </p>
<p>Hollerith then devised a machine to “read” the card by probing it with pins, so that only where there was a hole would the pin pass through the card to make an electrical connection, causing the appropriate counter to advance. </p>
<p>For example, if a card for a white male farmer passed through the machine, a counter for each of these categories would be increased by one. The card was made sturdy enough to allow passage through the card reading machine multiple times, for counting different categories or checking results.</p>
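<p>To make the tabulating principle concrete, here is a minimal sketch in Python – an illustration of the counting idea only, not Hollerith’s actual card layout – in which the punched categories on each card advance a set of counters:</p>
<pre><code>from collections import Counter

# Each "card" holds the category holes punched for one person.
# These category names are illustrative, not Hollerith's real columns.
cards = [
    {"male", "white", "farmer"},
    {"female", "white"},
    {"male", "foreign_born", "farmer"},
]

counters = Counter()
for card in cards:
    # In the real machine, a pin passing through each hole closed a
    # circuit and advanced an electromechanical counter by one.
    for hole in card:
        counters[hole] += 1

print(counters)
</code></pre>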
<p>The count proceeded so rapidly that the <a href="https://play.google.com/books/reader?id=MGZqAAAAMAAJ&pg=GBS.PA1">state-by-state numbers needed for congressional apportionment</a> were certified before the end of November 1890. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=470&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=470&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=470&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This ‘mechanical punch card sorter’ was used for the 1950 census.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/library/photos/machinists_technicians_5.html">U.S. Census Bureau</a></span>
</figcaption>
</figure>
<h2>Rise of the punched card</h2>
<p>After his census success, <a href="https://www.worldcat.org/title/computer-a-history-of-the-information-machine/oclc/1110437971?referer=br&ht=edition">Hollerith went into business selling this technology</a>. The company he founded, the Tabulating Machine Company, would, after he retired, become International Business Machines – IBM. IBM led the way in perfecting card technology for recording and tabulating large sets of data for a variety of purposes. </p>
<p>By the 1930s, many businesses were using cards for record-keeping procedures, such as payroll and inventory. Some data-intensive scientists, especially astronomers, were also finding the cards convenient. IBM had by then standardized an 80-column card and had developed keypunch machines that would change little for decades. </p>
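<p>As a rough illustration of the fixed-width record idea behind those cards – the field names and widths below are hypothetical, not IBM’s actual column assignments – a payroll entry can be packed into an 80-character “card” like this:</p>
<pre><code># Hypothetical fixed-width layout, 80 characters wide, in the spirit of
# an 80-column punched card (not IBM's real column assignments).
FIELDS = [("name", 30), ("department", 20), ("hours", 10), ("rate", 10)]

def to_card(record):
    card = "".join(str(record[name]).ljust(width)[:width] for name, width in FIELDS)
    return card.ljust(80)[:80]

payroll = {"name": "H. HOLLERITH", "department": "CENSUS", "hours": "40", "rate": "0.55"}
card = to_card(payroll)
print(repr(card), len(card))  # always exactly 80 characters
</code></pre>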
<p>Card processing became one leg of the mighty computer industry that blossomed after World War II, and IBM for a time would be the third-largest corporation in the world. Card processing served as a scaffolding for vastly more rapid and space-efficient purely electronic computers that now dominate, with little evidence remaining of the old regime. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blue IBM punch card.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Blue-punch-card-front.png">Gwern/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Those who have grown up knowing computers only as easily portable devices, to be communicated with by the touch of a finger or even by voice, may be unfamiliar with the room-size computers of the 1950s and ’60s, where the primary means of loading data and instructions was by creating a deck of cards at a keypunch machine, and then feeding that deck into a card reader. This persisted as the default procedure for many computers well into the 1980s. </p>
<p><a href="https://www.worldcat.org/title/grace-hopper-navy-admiral-and-computer-pioneer/oclc/19516564&referer=brief_results">As computer pioneer Grace Murray Hopper recalled</a> about her early career, “Back in those days, everybody was using punched cards, and they thought they’d use punched cards forever.”</p>
<p>Hopper had been an important member of the team that created the first commercially viable general-purpose computer, the Universal Automatic Computer, or UNIVAC, one of the card-reading behemoths. Appropriately enough, the first UNIVAC, delivered in 1951, went to the U.S. Census Bureau, which was still hungry to improve its data processing capabilities.</p>
<p>No, computer users would not use punched cards forever, but they used them through the Apollo Moon-landing program and the height of the Cold War. Hollerith would likely have recognized the direct descendants of his 1890s census machinery almost 100 years later. </p>
<p><em>This is an updated version of an article originally published on October 15, 2019.</em></p>
<p>[ <em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/172850/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Lindsay Roberts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As the country grew, each census required greater effort than the last. That problem led to the invention of the punched card – and the birth of an industry.David Lindsay Roberts, Adjunct Professor of Mathematics, Prince George's Community CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1695872021-10-18T09:38:25Z2021-10-18T09:38:25ZFive tips to stop your computers slowing down after updates<figure><img src="https://images.theconversation.com/files/426529/original/file-20211014-26-ujv3ph.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Many people worry that software updates slow their computer down.</span> <span class="attribution"><span class="source">Rido/Shutterstock</span></span></figcaption></figure><p>“Please download an update” pops up on most people’s computers regularly, but many worry about doing just that in case their computer doesn’t work as well afterwards.</p>
<p>Computer users install software updates to improve the <a href="https://ieeexplore.ieee.org/document/6884473">performance</a> and security of their computers. However, slowdowns can happen, and there are several reasons why. </p>
<p>If your computer has a limited amount of <a href="https://ieeexplore.ieee.org/document/7581073">random access memory</a> (RAM, the physical hardware inside a computer that temporarily stores data), then updates could slow your computer down. Nowadays, 4-8GB of RAM would count as limited, and 16-32GB would be ideal for a general-purpose computer. Software updates often require additional RAM, which reduces the amount available for the <a href="https://www.springer.com/gp/book/9780387914626">operating system</a> to run <a href="https://www.springer.com/gp/book/9783030670832">software</a>. </p>
<p>In addition, most home computers don’t run on powerful <a href="https://www.springer.com/gp/book/9789811656613">processors</a>, which means that installing many software updates can cause slowdowns. Updated software tends to demand more processing power, leaving less of the computer’s capacity for the operating system and application software. Each new update also increases the total number of processes to be handled, delaying the tasks the operating system and applications need to run smoothly.</p>
<p>Similarly, if your computer has a lot of software installed, consuming a large amount of memory and <a href="https://www.hp.com/us-en/shop/tech-takes/what-is-processor-speed">processing power</a>, then you have limited resources left for running the computer. Installing new software updates puts further pressure on your computer and results in it slowing down. </p>
<p>Meanwhile, if you have a large volume of data stored on your <a href="https://ieeexplore.ieee.org/document/7876257/">computer disk</a>, then updates might also result in a slowdown. Software updates might use your <a href="https://link.springer.com/book/10.1007%2F978-3-319-69907-3">storage space</a> for temporary processing, further reducing the available disk space. This ultimately results in slower computer performance with each software update.</p>
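<p>If you want to check whether memory or disk space is likely to be the bottleneck on your own machine, a short sketch using the third-party psutil package (an assumption: it must be installed separately, for example with pip) can report both:</p>
<pre><code>import psutil  # third-party package: pip install psutil

GIB = 1024 ** 3

mem = psutil.virtual_memory()
disk = psutil.disk_usage("/")  # use "C:\\" on Windows

print(f"RAM installed : {mem.total / GIB:.1f} GiB")
print(f"RAM available : {mem.available / GIB:.1f} GiB")
print(f"Disk free     : {disk.free / GIB:.1f} GiB ({100 - disk.percent:.0f}% free)")

# Rough rule of thumb from the article: little RAM or a nearly full
# disk makes slowdowns after large updates more likely.
if 8 * GIB > mem.total or disk.percent > 90:
    print("This machine may struggle after large software updates.")
</code></pre>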
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/your-smartphone-is-not-making-you-dumber-digital-tech-can-enhance-our-cognitive-abilities-164485">Your smartphone is not making you dumber — digital tech can enhance our cognitive abilities</a>
</strong>
</em>
</p>
<hr>
<p>However, if your computer has sufficient memory, processor power and disk space, then software updates should result in your computer performing better. Remember, despite being rigorously tested for accuracy and security, operating systems and software may contain loopholes that can be targeted by unauthorised users to gain access to your computer and personal data. Whenever these loopholes are identified, organisations release patches in the form of software updates, and installing those updates is very important to keep your computer and data safe.</p>
<p>So what are the simple steps to improve computer performance and avoid it slowing down?</p>
<h2>1. Regularly delete temporary files</h2>
<p><a href="https://ieeexplore.ieee.org/document/143626">Temporary files</a> and folders are regularly created on computers by operating system and application software. These files consume computer disk space and often use processing power which can be attributed to the slowing down of a computer. Regularly deleting these files can improve a computer’s overall performance and speed. Temporary files can be deleted safely using <a href="https://www.softwaretestinghelp.com/best-free-pc-cleaner-software/">disk clean up</a> and management tools for popular operating systems including Windows, Macintosh and Linux.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/426667/original/file-20211015-18-1x47ed.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Software updates add more security to your computer and not downloading them can leave your computer less secure.</span>
<span class="attribution"><span class="source">Arts Illustrated Studios/Shutterstock</span></span>
</figcaption>
</figure>
<h2>2. Keep startup processes low</h2>
<p>Many software processes, such as those from Dropbox, Skype and Adobe, may automatically start up and run in the background when your computer starts. These processes can be monitored using tools such as the Task Manager in the Windows operating system. Always keep the number of startup processes as low as possible by disabling the unnecessary ones. Remember, some startup processes required by the operating system cannot be disabled. </p>
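<p>To see which background processes are using the most memory before deciding what to disable at startup, a small sketch with the same psutil package (again an assumption that it is installed) lists the biggest consumers:</p>
<pre><code>import psutil  # third-party package: pip install psutil

processes = []
for proc in psutil.process_iter(["name", "memory_info"]):
    mem = proc.info.get("memory_info")
    if mem is not None:
        processes.append((mem.rss, proc.info.get("name") or "unknown"))

# Show the ten processes holding the most resident memory.
for rss, name in sorted(processes, reverse=True)[:10]:
    print(f"{rss / 1024 ** 2:8.1f} MB  {name}")
</code></pre>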
<h2>3. Uninstall unnecessary software</h2>
<p>There might be some software installed on your computer that you rarely use. <a href="https://ieeexplore.ieee.org/document/4222580">Uninstalling</a> these bits of software might improve performance and speed: the computer gains disk space and frees up some processing power. This can be done using the add or remove programs feature on Windows, or by moving applications to the bin on Macs. </p>
<h2>4. Store data in the cloud</h2>
<p>Storing bulk data in the cloud (a network of remote servers) can improve computer performance significantly by saving computer disk space and related file management. It is wise to store big files like music and video there. The only limitation of using <a href="https://ieeexplore.ieee.org/document/5066833">cloud space</a> is that an active internet connection is required to access the files, and you need to pay a provider such as Apple for access. But this can be helpful in keeping your computer running faster.</p>
<h2>5. Increase computer resources</h2>
<p>The ultimate solution to improve a computer’s speed is to increase its resources by adding or replacing <a href="https://ieeexplore.ieee.org/document/7581073">hardware</a> such as memory, the processor and disk drives. However, it is not possible to add RAM to every machine, and the upgrade can be more complex than you are comfortable with, so do make sure you have tried the other (easier) solutions first.</p><img src="https://counter.theconversation.com/content/169587/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It can feel like your computer slows down after it updates, here’s why.Omprakash Kaiwartya, Senior Lecturer in Connected Systems, Nottingham Trent UniversityMufti Mahmud, Associate Professor of Cognitive Computing, Nottingham Trent UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1655622021-08-18T14:41:50Z2021-08-18T14:41:50ZHow older people are mastering technology to stay connected after lockdown<figure><img src="https://images.theconversation.com/files/416754/original/file-20210818-15-1oayf6m.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4565%2C3018&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/photo-senior-man-using-smartphone-set-265218128">Tomertu/Shutterstock</a></span></figcaption></figure><p>It’s a well-worn stereotype: the image of an elderly person fiddling with technology that leaves them completely bamboozled. The media often depict older people struggling to use or manage digital technology. While this is often designed to be humorous, it can undermine them as users of technology. And that’s a problem if it turns older people off from trying to engage with digital devices, as it can affect their wellbeing.</p>
<p>Older adults are already at a digital disadvantage: <a href="https://www.ofcom.org.uk/about-ofcom/latest/media/media-releases/2021/digital-divide-narrowed-but-around-1.5m-homes-offline">18% of over-65s</a> do not have internet access. Propelled by enforced isolation, older people increasingly <a href="https://www.theguardian.com/society/2021/mar/10/computers-are-marvellous-older-people-embrace-internet-in-lockdow">turned to technology</a> during the pandemic, but not all were able to connect or communicate with friends and family via the internet. </p>
<p>Of course some older people have less experience of digital technology than others, and unsurprisingly <a href="https://www.cambridge.org/core/journals/ageing-and-society/article/is-it-love-or-loneliness-exploring-the-impact-of-everyday-digital-technology-use-on-the-wellbeing-of-older-adults/F391725A9584A666B1EDCEC68E03BCF3">describe mixed emotions</a> regarding the use of everyday devices such as smartphones, laptops, e-readers and tablets. They also report not having much confidence when it comes to using them. Lack of control, a sense of being overwhelmed, and poor product design can lead to feelings of being incompetent, alone and even trapped. </p>
<p>Yet for those who persevere, the rewards can be plentiful: completing tasks more easily, communicating more effectively, increased independence and a sense of achievement. These things are important if this growing part of the population is to experience ageing in a more positive and empowering way. </p>
<p>Through <a href="https://www.tandfonline.com/doi/full/10.1080/0267257X.2021.1945662?src=">our research</a> we wanted to examine these mixed emotions that older people have about using technology, and how they develop ways to combat the challenges they face. While digital technology has been shown to alleviate pressure on health and social care, until now there’s been a limited understanding of how using technology influences the way older adults live. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rSWbgNAgAE8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Attitudes towards technology</h2>
<p>While many see digital technology as a challenge to be conquered, there are different ways of overcoming or confronting the obstacles. Some may view the challenge as a personal goal, using instruction manuals or simply trial and error to prevail over software updates, unwanted viruses or junk mail.</p>
<p>Others view digital technology as a collective endeavour, asking friends and family for help. Not only is this the most successful strategy, it also fosters important interaction with others. For example, during the pandemic many <a href="https://www.ofcom.org.uk/about-ofcom/latest/media/media-releases/2021/digital-divide-narrowed-but-around-1.5m-homes-offline">younger people acted</a> as IT support for older friends and relatives.</p>
<p>Not all elderly people have this kind of network, but arguably they can benefit most from greater use of the internet to feel connected and keep loneliness at bay. In these situations, there are useful schemes run by charities such as Age UK, where <a href="https://www.ageuk.org.uk/discover/2021/june/digital-champions/">digital champions</a> can help older adults master technology.</p>
<h2>Adopting strategies</h2>
<p>Understandably frustrations emerge when learning a new skill, but some older people have shown how they overcome their exasperation by developing a relationship with their devices. Naming their tablet or humanising their phone helps to bond older people to technology.</p>
<p>In our study different devices were often referred to as having a personality, gender, or even a mind of their own. This strategy brings a little levity to a situation that could otherwise be stressful.</p>
<p>Once these people become more used to digital technology, familiarity can encourage continued use. With a new device, they can download software and apps they already know and understand, so that it feels less alien. Similarly, if a touchscreen is problematic, some older people might decide to use a keyboard and mouse instead.</p>
<figure class="align-center ">
<img alt="An older black women on her laptop smiling and looking happy." src="https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/416759/original/file-20210818-13-dou8fx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Being able to use technology keeps older people connected and reduce feelings of isolation.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/happy-asian-retired-woman-use-notebook-1681881598">M2020/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Breeding confidence</h2>
<p>Using technology at any age can have its pros and its cons, but our research reveals that older adults can offer a unique perspective. Using lifelong wisdom, they can take a step back and acknowledge that technology has its faults. If things go wrong, their judgement and experience are useful in helping them understand that the key to using technology is persistence. One participant, Christopher, 83, said:</p>
<blockquote>
<p>There’s one sure thing: life will come to an end, and technology will always go wrong. My son’s partner sends me texts from their holiday in Tunis. When I try to reply I keep getting ‘no service’ and my message is refused … [but] I know they will be worried if they don’t get a reply. When I was a kid, Tunis was a distant desert war zone, with cinema newsreels a week later … and here’s me now, whingeing about lack of instant contact.</p>
</blockquote>
<p>These findings are significant for technology development, marketing and customer services. Designing technology for older people should be based on their experiences and offer continuity. Digital devices need to include familiar commands, buttons, screens, and add-ons to previous models. This will enhance the ability to get to grips with updates and developments. And in turn this will help foster social connections as well as boost independence and confidence.</p>
<p>Stereotyping of older people also needs to stop. Experience and perspective should be acknowledged, respected and reflected in marketing campaigns. Messaging should underscore the potential benefits of mastering technology, emphasising the importance of being connected and socially involved to a person’s independence.</p>
<p>Finally, customer service should be easily accessible and well versed in issues older people face to provide the necessary support, building on schemes offered by Age UK’s digital champions. Clearly there is a crucial role for friends, family, and communities to ensure that older people remain socially active, engaged and connected through technology. Their wellbeing may depend on it.</p><img src="https://counter.theconversation.com/content/165562/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Older people are often labelled technophobic, but since lockdown the digital divide is closing as more and more use technology to connect with others.Carolyn Wilson-Nash, Lecturer, Marketing and Retail, Stirling Management School, University of StirlingJulie Tinson, Professor of Marketing, University of StirlingLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1614162021-05-25T19:38:59Z2021-05-25T19:38:59ZWe’re just not compatible any more: why Microsoft finally dumped Internet Explorer<figure><img src="https://images.theconversation.com/files/402529/original/file-20210525-21-1976py9.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4076%2C2684&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Glenn Carstens-Peters/Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Tech giant Microsoft recently announced the retirement of its longstanding web browser, Internet Explorer, in favour of its newer product, Microsoft Edge. With support for Internet Explorer only set to last until June 15, 2022, its remaining users have just over a year to find an alternative. But of course, most web users already have.</p>
<p>While the eventual downfall of Internet Explorer was seen as a foregone conclusion by those who monitor web trends, the news might come as an unwelcome surprise for those who are somewhat less up-to-date.</p>
<p>For the most part, though, this news is a whimper rather than a bang — a footnote at the end of an iconic story spanning more than 25 years.</p>
<p>As a current professional in the IT industry, I’ll break down some possible reasons for this decision, and what we can learn from it.</p>
<h2>Searching for the answer</h2>
<p>Almost everyone is familiar with the idea of “googling” something, but there’s no such thing as “microsofting” something. How did Google manage to become synonymous with web searching, while Microsoft, despite its long and pioneering history, failed to become synonymous with anything?</p>
<p>The answer is market share. Google handles <a href="https://gs.statcounter.com/search-engine-market-share">92.24% of web searches</a> — more than 3.5 billion requests a day. Microsoft’s own search engine, Bing, has a paltry 2.29%.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Graph of search engines' global market share" src="https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/402485/original/file-20210525-18-balp68.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Here’s why Google is synonymous with searching the web.</span>
<span class="attribution"><span class="source">StatCounter</span></span>
</figcaption>
</figure>
<p>It’s easy to see why users prefer Google’s own web browser, Chrome, over Microsoft’s Internet Explorer, which uses Bing as its default search engine. Users who prefer searching via Google (which is almost everyone) can make Google the default search engine in Internet Explorer. But it’s probably easier just to install Chrome and use Google from there.</p>
<h2>Success breeds complacency; complacency breeds failure</h2>
<p>Microsoft wasn’t always a bit player. Back when the web was in its infancy, it was a market-leading pioneer. Before there were app stores, or 5G, or even widespread personal computers, there were large mainframe computers with “unfriendly” Unix-based operating systems developed in the 1970s.</p>
<p>These systems were about as bare-bones as you can get, with little consideration given to graphics or usability. The early web browsers that ran on them, such as the first versions of Netscape, were similarly no-frills.</p>
<p>This is where Microsoft came in, by focusing on making “personal computers” more personal. With much nicer designs and more intuitive user interfaces, by the time Internet Explorer launched in 1995, Microsoft had cemented itself at the forefront of the digital world.</p>
<figure class="align-center ">
<img alt="1995 Internet Explorer logo" src="https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=198&fit=crop&dpr=1 600w, https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=198&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=198&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=249&fit=crop&dpr=1 754w, https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=249&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/402530/original/file-20210525-14-1hakyy8.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=249&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">You can almost hear the sound of the dial-up modem.</span>
<span class="attribution"><span class="source">Wikimedia Commons</span></span>
</figcaption>
</figure>
<p>But as US Baptist minister and civil rights leader Benjamin E. Mays <a href="https://www.brainyquote.com/quotes/benjamin_e_mays_610662">famously warned</a>, “The tragedy of life is often not in our failure, but rather in our complacency.”</p>
<p>Having established its reputation, Microsoft stopped pushing Internet Explorer’s development, and started venturing elsewhere, continually improving Windows but not its web browser. From that point on, Internet Explorer was always late to the party in introducing innovations such as tabbed browsing and search bars. It fell further into irrelevance and obsolescence.</p>
<h2>Compatibility issues</h2>
<p>Having spent much of my life as a web developer, one of my biggest gripes is the incompatibility of some web browsers. It’s exhausting and demoralising spending hours polishing web pages, only for them not to run properly on some browsers. </p>
<p>This concern even spread to Microsoft’s own in-house developers. In a 2019 <a href="https://techcommunity.microsoft.com/t5/windows-it-pro-blog/the-perils-of-using-internet-explorer-as-your-default-browser/ba-p/331732">blog post</a> titled “The perils of using Internet Explorer as your default browser”, Microsoft’s Chris Jackson warned:</p>
<blockquote>
<p>[…] developers by and large just aren’t testing for Internet Explorer these days. They’re testing on modern browsers.</p>
</blockquote>
<p>The message was clear: web developers don’t get on well with Internet Explorer, so sites that work well on other browsers might not work here — and that problem is only going to get worse.</p>
<p>With Microsoft having lost interest in making sure Internet Explorer keeps up, it has transferred its attention to its new browser, <a href="https://www.microsoft.com/en-us/edge">Microsoft Edge</a>. But the horse may already have bolted. The marketplace is crowded with Google’s Chrome, Apple’s Safari, Mozilla’s Firefox, and numerous open source browsers.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/battle-of-the-browsers-how-the-web-was-won-173">Battle of the browsers: how the web was won</a>
</strong>
</em>
</p>
<hr>
<h2>Phoning it in</h2>
<p>Here’s another key stat that illustrates Internet Explorer’s decline: in 2020, <a href="https://www.perficient.com/insights/research-hub/mobile-vs-desktop-usage">more than two-thirds of all website visits</a> were via a mobile device.</p>
<p>Now, a browser that can sync across multiple platforms is a necessity. In a world of Apple and Android devices, the term “Windows phone” sounds prehistoric — because it pretty much is. Operating system support for Windows phones <a href="https://www.theverge.com/2017/7/11/15952654/microsoft-windows-phone-end-of-support">ended in 2017</a>, just seven years after Microsoft first launched the range.</p>
<figure class="align-center ">
<img alt="Hands using a tablet in front of a laptop" src="https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/402531/original/file-20210525-17-12mteyk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A browser that works seamlessly across a variety of devices is a must these days.</span>
<span class="attribution"><span class="source">Taras Shypka/Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>So, having existed since the dawn of the internet age (or at least since the internet went truly mainstream), Internet Explorer has failed in many ways to keep up.</p>
<p>Despite the success of its Surface tablets, Microsoft failed to maintain a foothold in the smartphone market, which may explain its unwillingness to keep developing Internet Explorer. Or maybe it’s the other way around, and Internet Explorer’s clunkiness is the reason no one uses a Windows phone.</p>
<p>But the bottom line is Internet Explorer just lacks the versatility needed by web-savvy users. And as of next year, even the non-savvy users will stop relying on it too.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-spent-six-years-scouring-billions-of-links-and-found-the-web-is-both-expanding-and-shrinking-159215">We spent six years scouring billions of links, and found the web is both expanding and shrinking</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/161416/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Vinh Bui does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Once a pioneer of the information age, now stereotyped as the browser of choice for people who are less than web-savvy, the curtain will finally come down on Internet Explorer next year.Vinh Bui, Lecturer, Southern Cross UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1599902021-05-20T12:27:00Z2021-05-20T12:27:00ZShape-shifting computer chip thwarts an army of hackers<figure><img src="https://images.theconversation.com/files/401722/original/file-20210519-19-1m48kfo.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5656%2C3166&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Morpheus secure processor works like a puzzle that keeps changing before hackers have a chance to solve it.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/TOOhhlGHOsQ">Alan de la Cruz via Unsplash</a></span></figcaption></figure><p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take about interesting academic work.</em></p>
<h2>The big idea</h2>
<p>We have developed and tested a <a href="https://doi.org/10.1145/3297858.3304037">secure new computer processor</a> that thwarts hackers by randomly changing its underlying structure, thus making it virtually impossible to hack. </p>
<p>Last summer, 525 security researchers spent three months trying to hack our Morpheus processor as well as others. <a href="https://spectrum.ieee.org/tech-talk/semiconductors/processors/morpheus-turns-a-cpu-into-a-rubiks-cube-to-defeat-hackers">All attempts against Morpheus failed</a>. This study was part of a program sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA) to <a href="https://spectrum.ieee.org/tech-talk/computing/embedded-systems/darpa-hacks-its-secure-hardware-fends-off-most-attacks">design a secure processor</a> that could protect vulnerable software. DARPA <a href="https://www.darpa.mil/news-events/2020-01-28">released the results of the program to the public</a> for the first time in January 2021.</p>
<p>A processor is the piece of computer hardware that runs software programs. Since a processor underlies all software systems, a secure processor has the potential to protect any software running on it from attack. Our team at the University of Michigan first developed Morpheus, a secure processor that thwarts attacks by turning the computer into a puzzle, in 2019.</p>
<p>A processor has an architecture – x86 for most laptops and ARM for most phones – which is the set of instructions software needs to run on the processor. Processors also <a href="https://www.computerhope.com/jargon/m/microarchitecture.htm">have a microarchitecture</a>, or the “guts” that enable the execution of the instruction set, the speed of this execution and how much power it consumes.</p>
<p>Hackers need to be intimately familiar with the details of the microarchitecture to <a href="https://theconversation.com/microprocessor-designers-realize-security-must-be-a-primary-concern-98044">graft their malicious code, or malware, onto vulnerable systems</a>. To stop attacks, Morpheus randomizes these implementation details to turn the system into a puzzle that hackers must solve before conducting security exploits. From one Morpheus machine to another, details like the commands the processor executes or the format of program data change in random ways. Because this happens at the microarchitecture level, software running on the processor is unaffected.</p>
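<p>As a loose software analogy – our own illustration, not the actual Morpheus design – imagine a machine that periodically reshuffles the mapping between instruction names and the opcodes that encode them. Programs expressed in terms of the instruction names (the architecture) keep working, but anything that relied on yesterday’s opcodes (an implementation detail) is meaningless today:</p>
<pre><code>import random
import time

INSTRUCTIONS = ["LOAD", "STORE", "ADD", "JUMP"]
CHURN_PERIOD = 0.05  # reshuffle roughly every 50 milliseconds

def new_encoding():
    """Assign a fresh, random opcode to each instruction name."""
    opcodes = random.sample(range(256), k=len(INSTRUCTIONS))
    return dict(zip(INSTRUCTIONS, opcodes))

encoding = new_encoding()
last_churn = time.monotonic()

def execute(program):
    """Run a program written against instruction names, not opcodes."""
    global encoding, last_churn
    for op in program:
        if time.monotonic() - last_churn > CHURN_PERIOD:
            encoding = new_encoding()  # the "puzzle" changes underneath
            last_churn = time.monotonic()
        print(f"{op} -> opcode {encoding[op]:#04x}")

execute(["LOAD", "ADD", "STORE", "JUMP"])
</code></pre>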
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a fan on top of a metal square in the middle of a computer circuit board" src="https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=427&fit=crop&dpr=1 600w, https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=427&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=427&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=537&fit=crop&dpr=1 754w, https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=537&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/401701/original/file-20210519-19-1t96mso.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=537&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Morpheus computer processor, inside the square beneath the fan on this circuit board, rapidly and continuously changes its underlying structure to thwart hackers.</span>
<span class="attribution"><span class="source">Todd Austin</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
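<p>One way to picture what randomizing “the format of program data” could mean is a toy software analogy: each machine instance picks a random encoding key at start-up, programs that go through the normal interface never notice, and a raw value leaked by an attacker is meaningless without the key. This sketch is not Morpheus’s actual hardware mechanism, only an illustration of the idea.</p>
<pre><code>/* toy_encoding.c - a software ANALOGY for randomized data representation.
 * Not the real Morpheus hardware; it only illustrates how the in-memory
 * format of data could differ from one machine instance to the next. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

static uint64_t encoding_key;   /* chosen randomly on each "boot" */

/* Legitimate software uses these helpers, so it never notices the encoding. */
static uint64_t encode_ptr(void *p)      { return (uint64_t)(uintptr_t)p ^ encoding_key; }
static void    *decode_ptr(uint64_t enc) { return (void *)(uintptr_t)(enc ^ encoding_key); }

int main(void) {
    srand((unsigned)time(NULL));
    encoding_key = ((uint64_t)rand() << 32) | (uint64_t)rand();

    int secret = 42;
    uint64_t stored = encode_ptr(&secret);          /* what lives "in memory" */

    /* Software that uses the helpers works normally. */
    printf("via helpers: %d\n", *(int *)decode_ptr(stored));

    /* An attacker who leaks the raw stored value sees only gibberish, and the
     * value would be different on every other machine instance. */
    printf("raw leaked value: 0x%llx\n", (unsigned long long)stored);
    return 0;
}
</code></pre>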
<p>A skilled hacker could reverse-engineer a Morpheus machine in as little as a few hours, if given the chance. To counter this, Morpheus also changes the microarchitecture every few hundred milliseconds. Thus, not only do attackers have to reverse-engineer the microarchitecture, but they have to do it very fast. With Morpheus, a hacker is confronted with a computer that has never been seen before and will never be seen again.</p>
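<p>The churn idea can be bolted onto the same toy: every so often a fresh key is chosen and all live encoded values are translated to it, so whatever an attacker has learned about the current encoding goes stale almost immediately. Again, this is a sketch of the concept, not the processor’s real implementation, which re-randomizes in hardware.</p>
<pre><code>/* toy_churn.c - sketch of periodic re-randomization ("churn").
 * Illustrative only; real churn happens in hardware, transparently. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint64_t key;            /* current encoding key */

static uint64_t fresh_key(void) {
    return ((uint64_t)rand() << 32) | (uint64_t)rand();
}

/* Re-encode every live value under a new key.  After this runs, anything an
 * attacker learned about the old encoding is useless. */
static void churn(uint64_t *live_values, size_t n) {
    uint64_t new_key = fresh_key();
    for (size_t i = 0; i < n; i++) {
        uint64_t plain = live_values[i] ^ key;   /* strip old encoding */
        live_values[i] = plain ^ new_key;        /* apply new encoding */
    }
    key = new_key;
}

int main(void) {
    uint64_t values[3];
    key = fresh_key();
    for (int i = 0; i < 3; i++)
        values[i] = (uint64_t)(i + 100) ^ key;   /* encode some live data */

    churn(values, 3);   /* in the toy, a timer would call this every ~100 ms */

    for (int i = 0; i < 3; i++)
        printf("decoded after churn: %llu\n", (unsigned long long)(values[i] ^ key));
    return 0;
}
</code></pre>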
<h2>Why it matters</h2>
<p>To conduct a security exploit, hackers use vulnerabilities in software to get inside a device. Once inside, they <a href="https://theconversation.com/guarding-against-the-possible-spectre-in-every-machine-89825">graft their malware</a> onto the device. Malware is designed to infect the host device to steal sensitive data or spy on users.</p>
<p>The typical approach to computer security is to fix individual software vulnerabilities to keep hackers out. For these patch-based techniques to succeed, programmers must write perfect software without any bugs. But ask any programmer, and the idea of creating a perfect program is laughable. Bugs are everywhere, and security bugs are the most difficult to find because they don’t impair a program’s normal operation. </p>
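<p>A classic example of the kind of bug these patch-based defenses chase is a buffer written without a size check. The hypothetical program below behaves perfectly on everyday input, which is exactly why such bugs are hard to find, yet an attacker who supplies an oversized input can overwrite neighbouring memory and start to take control of the program.</p>
<pre><code>/* overflow.c - a textbook memory-safety bug (do not copy into real code). */
#include <stdio.h>
#include <string.h>

void greet(const char *name) {
    char buffer[16];
    /* BUG: strcpy copies however many bytes 'name' contains, with no bounds
     * check.  Short names work fine; a long, attacker-crafted name overwrites
     * adjacent stack memory, including the saved return address that decides
     * where execution goes next. */
    strcpy(buffer, name);
    printf("hello, %s\n", buffer);
}

int main(int argc, char **argv) {
    greet(argc > 1 ? argv[1] : "world");   /* normal use: looks perfectly fine */
    return 0;
}
</code></pre>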
<p>Morpheus takes a distinct approach to security by augmenting the underlying processor to prevent attackers from grafting malware onto the device. With this approach, Morpheus protects any vulnerable software that runs on it. </p>
<h2>What other research is being done</h2>
<p>For the longest time, processor designers considered security a problem for software programmers, since programmers create the software bugs that lead to security concerns. But recently computer designers have discovered that hardware can help protect software. </p>
<p>Academic efforts, such as <a href="https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/">Capability Hardware Enhanced RISC Instructions</a> at the University of Cambridge, have demonstrated strong protection against memory bugs. Commercial efforts have begun as well, such as Intel’s soon-to-be-released <a href="https://newsroom.intel.com/editorials/intel-cet-answers-call-protect-common-malware-threats/">Control-flow Enforcement Technology</a>. </p>
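<p>Part of Intel’s technology is a hardware “shadow stack” that keeps a protected second copy of return addresses. The toy below mimics that idea in plain software, using hypothetical shadow_push/shadow_check helpers inserted by hand where a compiler or the hardware would act automatically; it is a sketch of the concept, not how the real technology is programmed.</p>
<pre><code>/* shadow_stack.c - software sketch of a shadow stack, the idea behind
 * return-address protection.  Hypothetical helpers for illustration;
 * real hardware support does this transparently. */
#include <stdio.h>
#include <stdlib.h>

static void *shadow[1024];     /* protected copy of return addresses */
static int   top;

static void shadow_push(void *ret_addr) {
    shadow[top++] = ret_addr;
}

static void shadow_check(void *ret_addr) {
    /* If the return address on the ordinary stack no longer matches the
     * protected copy, something (such as a buffer overflow) tampered with it,
     * so the program stops instead of jumping to attacker-chosen code. */
    if (shadow[--top] != ret_addr) {
        fprintf(stderr, "return address corrupted - aborting\n");
        abort();
    }
}

void worker(void) {
    shadow_push(__builtin_return_address(0));   /* GCC/Clang builtin: on entry */
    /* ... function body that might contain a memory-safety bug ... */
    shadow_check(__builtin_return_address(0));  /* just before returning */
}

int main(void) {
    worker();
    puts("returned safely");
    return 0;
}
</code></pre>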
<p>Morpheus takes a notably different approach of ignoring the bugs and instead randomizes its internal implementation to thwart exploitation of bugs. Fortunately, these are complementary techniques, and combining them will likely make systems even more difficult to attack.</p>
<h2>What’s next</h2>
<p>We are looking at how the fundamental design aspects of Morpheus can be applied to protect sensitive data on people’s devices and in the cloud. In addition to randomizing the implementation details of a system, how can we randomize data in a way that maintains privacy while not being a burden to software programmers?</p>
<p class="fine-print"><em><span>Todd Austin receives funding from DARPA, which supported the development of the Morpheus secure CPU through DARPA Contract HR0011-18-C-0019. He owns shares in Agita Labs, which is commercializing a derivative of the Morpheus technology. </span></em></p><p class="fine-print"><em><span>Lauren Biernacki receives funding from DARPA, which supported the development of the Morpheus secure CPU through DARPA Contract HR0011-18-C-0019.</span></em></p>Most computer security focuses on software, but computer processors are vulnerable to hackers, too. An experimental secure processor changes its underlying structure before hackers can figure it out.Todd Austin, Professor of Electrical Engineering and Computer Science, University of MichiganLauren Biernacki, Ph.D. Candidate in Computer Science & Engineering, University of MichiganLicensed as Creative Commons – attribution, no derivatives.