Machine learning – The Conversation
2024-03-21
Treatments tailored to you: how AI will change NZ healthcare, and what we have to get right first
<p>Imagine this: a novel virus is rapidly breaking out nationwide, resulting in an epidemic. The government introduces vaccination mandates and a choice of different vaccines is available. </p>
<p>But not everyone is getting the same vaccine. When you sign up for vaccination, you are sent a vial with instructions to send a sample of your saliva to the nearest laboratory. Just a few hours later you receive a message telling you which vaccine you should get. Your neighbour also signed up for vaccination. But their vaccine is different from yours. </p>
<p>Both of you are now vaccinated and protected, although each of you received a vaccine chosen according to “who you are”. Your genetics, age, gender and a myriad of other factors are captured in a “model” that predicts and determines the best option to protect you from the virus.</p>
<p>It all sounds a bit like science fiction. But since the <a href="https://www.genome.gov/human-genome-project">decoding of the human genome in 2003</a>, we have entered the age of precision prevention. </p>
<p>New Zealand has a long-standing newborn screening programme. This includes <a href="https://www.auckland.ac.nz/en/news/2023/11/30/newborn-genomic-sequencing-sick-babies.html">genome sequencing machines available nationwide</a> and a <a href="https://www.tewhatuora.govt.nz/our-health-system/genetic-health-service-nz/about/">genetic health service</a>. Programmes such as these open up the possibilities of public health genomics and precision public health for everyone.</p>
<p>The further expansion of these programmes, together with the growing use of artificial intelligence and machine learning to enable a shift towards more personalised preventive care, will change how public health care is delivered.</p>
<p>At the same time, these developments raise wider concerns over individual choice versus the greater good, personal privacy, and who is responsible for the protection of New Zealanders and their health information.</p>
<h2>What is precision prevention?</h2>
<p>Think of precision prevention (also known as personalised prevention) as public health action tailored to the individual rather than broader groups of society. </p>
<p>This targeted healthcare is achieved by balancing a range of variables (including your genes, life history and environment) with your risks (including everything that changes within you as you grow older). </p>
<p>While advances in genomics are making precision prevention possible, machine learning algorithms fuelled by our personal data have brought it closer to reality. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/its-2030-and-precision-medicine-has-changed-health-care-this-is-what-it-looks-like-90539">It's 2030, and precision medicine has changed health care – this is what it looks like</a>
</strong>
</em>
</p>
<hr>
<p>We generate data about ourselves every day – via social media, smartwatches and other wearable devices – helping to train algorithms to match medical prevention measures with individuals. </p>
<p>Combine all of these with AI-driven predictive modelling, and you have a system that can predict the current and future state of your health with an eerie level of accuracy, and <a href="https://www.forbes.com/sites/forbestechcouncil/2022/01/25/how-ai-could-predict-medical-conditions-and-revive-the-healthcare-system/?sh=362288726c47">help you take steps to prevent disease</a>. </p>
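To make the idea concrete, here is a purely illustrative sketch of the kind of risk model such a system might rest on. Every feature and weight below is invented for illustration; real systems learn these parameters from large health datasets.

```python
import math

# Toy "precision prevention" risk score combining a few invented features
# (age, a hypothetical genetic marker, daily activity) with made-up weights.
def risk_score(age, has_marker, daily_steps):
    z = 0.04 * (age - 40) + 1.2 * (1.0 if has_marker else 0.0) - 0.0002 * daily_steps
    return 1 / (1 + math.exp(-z))  # logistic function: maps to a 0-1 risk

# Two hypothetical people: a young, active person without the marker,
# and an older, less active person with it.
for person in [(35, False, 9000), (62, True, 3000)]:
    print(person, round(risk_score(*person), 2))
```

The point is not the numbers but the shape: many personal variables are compressed into a single probability that can then drive a tailored recommendation.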
<h2>Safety and delay</h2>
<p>The Prime Minister’s Chief Science Advisor recently <a href="https://www.pmcsa.ac.nz/artificial-intelligence-2/ai-in-healthcare/">published a report</a> mapping out the landscape of artificial intelligence and machine learning in New Zealand over the next five years. </p>
<p>While the report authors didn’t specifically reference “precision prevention”, they did include examples of this approach, such as <a href="https://edition.cnn.com/2023/08/01/health/ai-breast-cancer-detection/index.html">computer vision augmented mammography</a>. </p>
<p>But as the report suggests, adoption tends to fall behind the pace of innovation in AI. Te Whatu Ora–Health New Zealand has also <a href="https://www.tewhatuora.govt.nz/our-health-system/digital-health/national-ai-and-algorithm-expert-advisory-group-naiaeag-te-whatu-ora-advice-on-the-use-of-large-language-models-and-generative-ai-in-healthcare/">not approved</a> emerging large language models and generative artificial intelligence tools as safe and effective for use in healthcare. </p>
<p>This means generative AI-driven precision prevention practices, such as conversational AI for public health messaging, may have to wait before they can be deemed safe to use. </p>
<h2>Move forward with caution</h2>
<p>There is much to be excited about in the prospect of artificial intelligence and machine learning ushering in a new age of precision prevention and preventive health. But at the same time, we must temper this excitement with caution. </p>
<p>Artificial intelligence and machine learning may increase access to and utilisation of healthcare by lowering barriers to medical knowledge and reducing human bias. But government and medical agencies need to reduce barriers related to digital literacy and access to online platforms.</p>
<p>For those with limited access to online resources or limited digital literacy, the existing inequity in access to care and health could otherwise worsen. </p>
<p>Artificial intelligence also has a <a href="https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/">significant environmental impact</a>. <a href="https://arxiv.org/pdf/1906.02243.pdf">One study</a> estimated that training a single large AI model can emit around 284 tonnes of carbon dioxide – nearly five times the lifetime emissions of an average car.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/these-scientists-are-using-dna-to-target-new-drugs-for-your-genes-medicine-made-for-you-part-1-131986">These scientists are using DNA to target new drugs for your genes – Medicine made for you part 1</a>
</strong>
</em>
</p>
<hr>
<p>Finally, technology is a shifting landscape. Proponents of precision healthcare must take particular care with children and marginalised communities, whose access to resources may be limited. Maintaining privacy and choice is essential – everyone should be in a position to control what they share with AI agents. </p>
<p>In the end, each of us is different, and we all have our different needs for our health and for our lives. Moving more people to preventive care through precision healthcare will reduce the financial burden on the health system. </p>
<p>But as the report from the Prime Minister’s Chief Science Advisor emphasises, machine learning in healthcare is a nascent field. We need more public education and awareness before the technology becomes part of our everyday lives.</p>
<p class="fine-print"><em><span>Arindam Basu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
As New Zealand readies itself for AI-assisted medical treatment targeted to individuals, officials need to ensure the benefits outweigh the risks.
Arindam Basu, Associate Professor, Epidemiology and Environmental Health, University of Canterbury
Licensed as Creative Commons – attribution, no derivatives.

2024-03-19
Can AI improve football teams’ success from corner kicks? Liverpool and others are betting it can
<figure><img src="https://images.theconversation.com/files/582686/original/file-20240318-26-ut2che.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4000%2C2005&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Google DeepMind</span></span></figcaption></figure><p>Last Sunday, Liverpool faced Manchester United in the <a href="https://www.espn.com.au/football/report/_/gameId/699283">quarter finals of the FA Cup</a> – and in the final minute of extra time, with the score tied at three-all, Liverpool had the crucial opportunity of a corner kick. A goal would surely mean victory, but losing possession could be risky.</p>
<p>What was Liverpool to do? Attack or play it safe? And if they were to attack, how best to do it? What kind of delivery, and where should players be waiting to attack the ball?</p>
<p>Set-piece decisions like this are vital not only in football but in many other competitive sports, and traditionally they are made by coaches on the basis of long experience and analysis. However, Liverpool has recently been looking to an unexpected source for advice: researchers at the Google-owned UK-based artificial intelligence (AI) lab <a href="https://deepmind.google/discover/blog/advancing-sports-analytics-through-ai-research/">DeepMind</a>.</p>
<p>In a <a href="https://www.nature.com/articles/s41467-024-45965-x">paper published today</a> in Nature Communications, DeepMind researchers describe an AI system for football tactics called TacticAI, which can assist in developing successful corner kick routines. The paper says experts at Liverpool favoured TacticAI’s advice over existing tactics in 90% of cases.</p>
<h2>What TacticAI can do</h2>
<p>At a corner kick, play stops and each team has the chance to organise its players on the field before the attacking team kicks the ball back into play – usually with a specific prearranged plan in mind that will (hopefully) let them score a goal. Advice on these prearranged plans or routines is what TacticAI sets out to offer.</p>
<p>The package has three components: one that predicts which player is most likely to receive the ball in a given scenario, another that predicts whether a shot on goal will be taken, and a third that recommends how to adjust the position of players to increase or decrease the chances of a shot on goal.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram showing a soccer field with player positions marked, as well as a network diagram." src="https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=258&fit=crop&dpr=1 600w, https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=258&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=258&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=325&fit=crop&dpr=1 754w, https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=325&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/582707/original/file-20240319-28-xag9u9.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=325&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">TacticAI represents a corner-kick setup as a ‘graph’ of player positions and relationships, which it then uses to make predictions.</span>
<span class="attribution"><a class="source" href="https://doi.org/10.1038/s41467-024-45965-x">Wang et al. / Nature Communications</a></span>
</figcaption>
</figure>
<p>Trained on a dataset of 7,176 corner kicks from Premier League matches, TacticAI used a technique called “geometric deep learning” to identify key strategic patterns.</p>
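TacticAI’s actual models are graph neural networks trained on professional tracking data, but the core idea of the graph representation can be sketched in a few self-contained lines. All players, positions and weights below are invented for illustration: each player becomes a node with a feature vector, and edges are weighted by proximity.

```python
import math
from dataclasses import dataclass

# Hypothetical corner-kick setup. Positions (metres) and velocities invented.
@dataclass
class Player:
    x: float
    y: float
    vx: float
    vy: float
    attacking: bool

players = [
    Player(104.0, 36.0, 0.5, 0.2, True),    # attacker near the penalty spot
    Player(100.0, 30.0, 1.0, 0.0, True),    # attacker making a run
    Player(103.0, 35.0, -0.3, 0.1, False),  # marking defender
    Player(99.0, 31.0, 0.0, 0.0, False),    # covering defender
]

def node_features(p):
    return [p.x, p.y, p.vx, p.vy, 1.0 if p.attacking else 0.0]

def edge_weight(a, b):
    d = math.hypot(a.x - b.x, a.y - b.y)
    return 1.0 / (1.0 + d)  # nearer players influence each other more

def message_pass(players):
    # One round of message passing: each node aggregates its neighbours'
    # features, weighted by proximity (a stand-in for a learned layer).
    out = []
    for i, p in enumerate(players):
        agg, total = [0.0] * 5, 0.0
        for j, q in enumerate(players):
            if i == j:
                continue
            w = edge_weight(p, q)
            total += w
            for k, f in enumerate(node_features(q)):
                agg[k] += w * f
        out.append([f / total for f in agg])
    return out

embeddings = message_pass(players)
print(len(embeddings), len(embeddings[0]))  # one 5-dim message per player
```

In the real system, stacked learned layers of this kind produce the embeddings from which receiver and shot predictions are made.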
<p>The researchers say this approach could be applied not only to football, but to any sport in which a stoppage in the game allows teams to deliberately manoeuvre players into place unopposed, and plan the next sequence of play. In football, it could also be expanded in future to incorporate throw-in routines as well as other set pieces such as attacking free kicks.</p>
<h2>Vast amounts of data</h2>
<p>AI in football is not new. Even in amateur and semi-professional football, AI-powered auto-tracking camera systems are becoming commonplace, for example. At the last men’s and women’s World Cups in 2022 and 2023, AI in conjunction with advanced ball-tracking technology produced semi-automated offside decisions with an unprecedented level of accuracy.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/games-by-numbers-machine-learning-is-changing-sport-38973">Games by numbers: machine learning is changing sport</a>
</strong>
</em>
</p>
<hr>
<p>Professional football clubs have analytical departments using AI at every level of the game, predominantly in the areas of <a href="https://www.wired.com/story/ai-football-soccer-scouting/">scouting</a>, <a href="https://www.engadget.com/will-ai-revolutionize-professional-soccer-recruitment-130045118.html">recruitment</a> and <a href="https://theathletic.com/4966509/2023/10/19/wearable-technology-in-football/">athlete monitoring</a>. Other research has also tried to <a href="https://www.mdpi.com/1424-8220/23/9/4506">predict players’ shots on goal</a>, or guess from a video what <a href="https://www.nature.com/articles/s41598-022-12547-0">off-screen players are doing</a>. </p>
<p>Bringing AI into tactical decisions promises to offer coaches a more objective and analytical approach to the game. Algorithms can process vast amounts of data, identifying patterns that may not be apparent to the naked eye, giving teams valuable insights into their own performance as well as that of their opponents. </p>
<h2>A useful tool</h2>
<p>AI may be a useful tool, but it cannot make decisions about match play alone. An algorithm might suggest the optimal positional setup for an in-swinging corner or how best to exploit the opposition’s defensive tactics. </p>
<p>What AI cannot do is make decisions on the fly – like deciding whether to take a corner quickly to exploit an opponent’s lapse in concentration. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/b1zjjf5EN1g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Sometimes the best move is a speedy reaction to conditions on the ground, not an elaborate prearranged set play.</span></figcaption>
</figure>
<p>There’s also something to be said for allowing players creative licence in some situations. Once teams are using AI to suggest the optimal corner strategy, opponents will doubtless counter with their own AI-prompted defensive setup.</p>
<p>So while the tech behind TacticAI is very interesting, it remains to be seen whether it can evolve to be useful in open play. Could AI get to the stage where it can recognise the best tactical player substitution in a given situation? </p>
<p>DeepMind researchers have advanced decision-making like this in their sights for <a href="https://dl.acm.org/doi/10.1613/jair.1.12505">future research</a>, but will it ever reach a point where coaches would trust it?</p>
<p>My sense from discussions with people in the industry is that many believe AI should only be used as an input to decision-making, and not be allowed to make decisions itself. There is no substitute for the experience and instinct of the best coaches: the intangible ability to feel what the game needs, to make a change in formation, to play someone out of position. </p>
<h2>Smart tactics – but what about strategy?</h2>
<p>Coming back to that crucial Liverpool corner in last Sunday’s FA Cup quarter final: we don’t know whether Liverpool’s manager Jürgen Klopp considered AI advice, but the decision was made to play an attacking corner kick, presumably in the hope of scoring a last-minute winner. </p>
<p>The out-swinging delivery into the box may well have been the tactic with the highest probability of scoring a goal – but things rapidly went wrong. Manchester United gained possession of the ball, moved it down the pitch on the counterattack and slotted home the winning goal, sending Liverpool out of the tournament at the last moment.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DKk8N2PYwCA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Even the best tactics can go wrong.</span></figcaption>
</figure>
<p>So while AI might suggest the optimal delivery and setup for a set piece, a coach might decide the wiser move is to play safe and avoid the risk of a counterattack. If TacticAI continues its career progression as a coaching assistant, it will no doubt learn that keeping the ball in the corner and playing for penalties may sometimes be the better option.</p>
<p class="fine-print"><em><span>Mark Scanlan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
A new AI system may improve soccer tactics in 90% of corner kicks – but is it ready for the big leagues?
Mark Scanlan, Lecturer, Edith Cowan University
Licensed as Creative Commons – attribution, no derivatives.

2024-03-12
Ancient scrolls are being ‘read’ by machine learning – with human knowledge to detect language and make sense of them
<figure><img src="https://images.theconversation.com/files/580263/original/file-20240306-30-3x4aw.png?ixlib=rb-1.1.0&rect=1040%2C0%2C1253%2C379&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Vesuvius Challenge incentivizes technological development by inviting researchers to figure out how to ‘read’ ancient papyri excavated from volcanic ash of Mount Vesuvius in Italy. Columns of Greek text retrieved from a portion of a scroll. </span> <span class="attribution"><span class="source">(Vesuvius Challenge)</span>, <span class="license">Author provided</span></span></figcaption></figure><p>A groundbreaking announcement for the recovery of lost ancient literature was recently made. Using a non-invasive method that harnesses <a href="https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained">machine learning</a>, an international trio of scholars retrieved 15 columns of ancient Greek text from within a carbonized papyrus from <a href="https://www.herculaneum.ox.ac.uk/about-us/story-of-herculaneum">Herculaneum</a>, a seaside Roman town eight kilometres southeast of Naples, Italy.</p>
<p>Their achievement earned them a US$700,000 grand prize from the <a href="https://scrollprize.org/">Vesuvius Challenge</a>. The challenge sought to incentivize technological development by inviting public participation in the research. </p>
<p>It emerged from collaboration between computer scientist Brent Seales — who has <a href="https://doi.org/10.48550/arXiv.2304.02084">a long-standing interest</a> in non-invasive <a href="https://www2.cs.uky.edu/dri/the-scroll-from-en-gedi">technologies for studying</a> manuscripts — and technology investors Nat Friedman and Daniel Gross. </p>
<p>While the developments are exciting, technology is only part of the progress of scholarship. The work of reading and analyzing the new Greek and Latin texts recovered from the papyri will fall to human beings.</p>
<figure class="align-center ">
<img alt="Painting showing a mountain with a volcano erupting." src="https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=404&fit=crop&dpr=1 600w, https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=404&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=404&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=507&fit=crop&dpr=1 754w, https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=507&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/580261/original/file-20240306-28-umf4ff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=507&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">‘An Eruption of Vesuvius,’ by Johan Christian Dahl (1824).</span>
<span class="attribution"><span class="source">(The Metropolitan Museum of Art)</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Buried in ash</h2>
<p>Like Pompeii, <a href="https://www.youtube.com/watch?v=f5b8igA644o">Herculaneum</a> was buried by the catastrophic eruption of Mount Vesuvius in 79 CE. </p>
<p>Much of the ancient town remains underground. But <a href="https://theconversation.com/ai-will-let-us-read-lost-ancient-works-in-the-library-at-herculaneum-for-the-first-time-223583">in 1752</a>, excavation uncovered hundreds of papyrus scrolls in the library of an elaborate Roman villa. The Herculaneum papyri <a href="https://www.herculaneum.ox.ac.uk/research-and-publications/papyri">are the largest surviving example of an</a> intact ancient library preserved in the archaeological record: the library was found as it actually existed in 79 CE. </p>
<p>The precise number of books is unknown, says Michael McOsker, a research fellow in papyrology at University College London, and different methods of estimating give different results. </p>
<h2>Carbonized papyri</h2>
<p>The intense heat of Vesuvius’ <a href="https://www.nationalgeographic.org/encyclopedia/pyroclastic-flow/">pyroclastic flow</a> carbonized the papyri but, starved of oxygen, did not ignite them. Resembling lumps of coal, they were not immediately recognized as ancient books by 18th-century excavators.</p>
<figure class="align-center ">
<img alt="Three dark grey rectangular objects seen in a box." src="https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578769/original/file-20240228-16-sc89zf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Three unopened papyri from Herculaneum.</span>
<span class="attribution"><span class="source">(Bodleian Libraries/University of Oxford)</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>The papyri are so brittle that many were destroyed by early attempts to access their texts. Studying them has therefore always required ingenuity. In 1754, a <a href="https://www.smithsonianmag.com/history/buried-ash-vesuvius-scrolls-are-being-read-new-xray-technique-180969358">conservator and priest at the Vatican library</a> devised a machine for slowly unrolling them. </p>
<figure class="align-center ">
<img alt="A dark grey scroll." src="https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C7027%2C4995&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=425&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=425&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=425&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=534&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=534&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578666/original/file-20240228-7839-doqnyj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=534&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A portion of an unrolled Herculaneum papyrus.</span>
<span class="attribution"><a class="source" href="https://digital.bodleian.ox.ac.uk/objects/cac4db6a-8af5-4234-%20acb8-4b1ce819ef14">(Bodleian Libraries/University of Oxford)</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>More recently, <a href="https://www.imaging.org/common/uploaded%20files/pdfs/Papers/2001/PICS-0-251/4625.pdf">multispectral photography</a> has dramatically improved their legibility. But until now, a non-invasive method that would leave the scrolls intact remained out of reach. Its development marks a significant breakthrough.</p>
<p>McOsker notes there are 659 items in the catalogue listed as “not unrolled,” but some of these are parts of scrolls. </p>
<h2>Sparking innovation</h2>
<p>To kick-start the challenge, Seales <a href="https://scrollprize.org/data">made public</a> an array of high-resolution X-ray computed tomography (CT) scans of two scrolls as well as similar scans of detached fragments with visible ink. The latter are essential as a reference point (or “control”) for innovative approaches. </p>
<p>The competition’s design encouraged transparency and collaboration: data published in the pursuit <a href="https://scrollprize.org/winners">of smaller goals</a> benefited all competitors. Additionally, transparency enabled the independent verification of results. Teams coalesced around shared ideas and approaches to the problem.</p>
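The winning ink-detection approaches were deep networks trained on the CT volumes, using the detached fragments with visible ink as labelled ground truth. The underlying idea can be caricatured in a few lines: classify a small patch of voxels as ink or bare papyrus from its intensity. All values and the threshold here are invented, and in reality the contrast between carbon ink and carbonized papyrus is far too subtle for such a simple rule, which is precisely why machine learning was needed.

```python
# Toy ink detector: label a patch of CT voxel intensities as ink / no-ink
# by its average value. Patches, values and threshold are all hypothetical.
def patch_mean(patch):
    values = [v for row in patch for v in row]
    return sum(values) / len(values)

def predict_ink(patch, threshold=0.6):
    return patch_mean(patch) > threshold

ink_patch = [[0.7, 0.8], [0.9, 0.6]]    # hypothetical inked region
blank_patch = [[0.3, 0.4], [0.5, 0.2]]  # hypothetical bare papyrus

print(predict_ink(ink_patch), predict_ink(blank_patch))  # True False
```

The control fragments play the role of `ink_patch` here: known-ink examples against which any candidate detector can be checked.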
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-will-let-us-read-lost-ancient-works-in-the-library-at-herculaneum-for-the-first-time-223583">AI will let us read 'lost' ancient works in the library at Herculaneum for the first time</a>
</strong>
</em>
</p>
<hr>
<h2>Text mentions music, taste, sight</h2>
<p>The challenge made news in <a href="https://scrollprize.org/firstletters">October</a>, when the first letters were read: πορφυρας (a noun or adjective involving “purple”). </p>
<p>By the end of 2023, the criteria for awarding the grand prize were met: four passages of 140 characters, with 85 per cent of the letters recovered. <a href="https://scrollprize.org/grandprize">A PhD student studying machine learning, an engineer studying computer science and a robotics student</a> were declared the victors.</p>
<p>According to McOsker, the text they retrieved mentions music twice, as well as the senses of taste and sight. He thinks it is likely a work about sensation and decision-making, in the tradition of <a href="https://plato.stanford.edu/ENTRIES/epicurus/">the philosopher Epicurus (341–270 BCE)</a>. The challenge’s papyrological team is still analyzing it.</p>
<h2>Hundreds of rolls to be studied</h2>
<p>This year brings with it new goals: after five per cent of one scroll was read in 2023, the challenge set a <a href="https://scrollprize.org/2024_prizes#2024-grand-prize">2024 grand prize goal</a> of reading 90 per cent of four scrolls. With hundreds of rolls yet to be studied, the new method of recovering the contents of the Herculaneum papyri is only getting started.</p>
<p>But several obstacles remain. Producing scans at sufficiently high resolution can’t be done with ordinary equipment; it requires access to a facility with a particle accelerator. Access to the right equipment is limited and costly. To date, four scrolls and numerous detached fragments <a href="https://www.diamond.ac.uk">have been processed at a facility</a> near Oxford, England. </p>
<p>Most of the unopened scrolls are housed in Naples, and getting them safely to a facility will be complicated, as will reserving and paying for the beam time required to scan them.</p>
<p>Another limitation is that the technology for unrolling and flattening out a papyrus by virtual means — a process the challenge calls “segmentation” — is slow and expensive. Via current techniques, which involve a fair bit of manual manipulation, fully segmenting one scroll would cost US$1–5 million. Segmentation needs to become much more efficient to avoid a bottleneck.</p>
<h2>Critical minds needed</h2>
<p>Technology is only part of the equation. Essential to the challenge’s work is an international team of papyrologists. Their role is to analyze the model’s output of legible ancient Greek — and in so doing determine which approaches are most effective.</p>
<p>Papyrology is thrilling work, but also challenging and painstaking. It requires mastery of ancient languages and ideas as well as the puzzle-solver’s ability to fill in the inevitable gaps. Papyrology is a niche specialization: in the larger world of classics, papyrologists are rare birds. The number of Herculaneum specialists is even fewer. </p>
<p>For the challenge truly to succeed, we’re going to need critical minds as well as whizbang technology. There’s potentially a fair bit of new ancient philosophy headed our way, but it needs to be pieced together into a coherent text — letter by letter, word by word, sentence by sentence — before it can be studied more widely. That’s going to require scholars.</p>
<p class="fine-print"><em><span>C. Michael Sampson receives funding from the Social Sciences and Humanities Research Council of Canada for 'the Books of Karanis,' a project that studies fragmentary Greek literature from the Egyptian village Karanis. </span></em></p>
However exciting the technological developments may be, the task of reading and analyzing the Greek and Latin texts recovered from the papyri will fall to human beings.
C. Michael Sampson, Associate Professor of Classics, University of Manitoba
Licensed as Creative Commons – attribution, no derivatives.

2024-03-06
Sharks, turtles and other sea creatures face greater risk from industrial fishing than previously thought − we estimated added pressure from ‘dark’ fishing vessels
<figure><img src="https://images.theconversation.com/files/580177/original/file-20240306-22-3smwaf.jpg?ixlib=rb-1.1.0&rect=17%2C0%2C2977%2C1998&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Seabirds like this sooty shearwater can drown when they become tangled in drift nets and other fishing gear. </span> <span class="attribution"><a class="source" href="https://flic.kr/p/dj3H6v"> Roy Lowe, USFWS/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>My colleagues and I mapped the activity of “dark” fishing vessels – boats that turn off their location devices or lose signal for technical reasons – in the northeast Pacific. In <a href="https://www.science.org/doi/10.1126/sciadv.adl5528">our new study</a>, we found that highly mobile marine predators, such as sea lions, sharks and leatherback sea turtles, are significantly <a href="https://news.stanford.edu/2019/03/13/tunas-sharks-ships-sea/">more threatened than previously thought</a> because of large numbers of dark fishing vessels operating where these species live. </p>
<p>While we couldn’t directly watch the activities of each of these dark vessels, <a href="https://www.theguardian.com/environment/2022/nov/02/at-least-6-percent-global-fishing-likely-as-ships-turn-off-tracking-devices-study">new technological advances</a>, including satellite data and machine learning, make it possible to estimate where they go when they are not broadcasting their locations. </p>
<p>Examining five years of data from fishing vessel location devices and the habitats of 14 large marine species, including seabirds, sharks, turtles, sea lions and tunas, we found that our estimates of risk to these animals increased by nearly 25% when we accounted for the presence of dark vessels. For some individual predators, such as albacore and bluefin tunas, this adjustment increased risk by over 36%. The main hot spots were in the Bering Sea and along the Pacific coast of North America. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bjFSgr_B38I?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Bycatch, or accidental take, is the leading threat to some endangered marine species.</span></figcaption>
</figure>
<h2>How we did our work</h2>
<p>Fishing boats use <a href="https://globalfishingwatch.org/faqs/what-is-ais/">Automatic Identification System</a>, or AIS, to avoid colliding with each other. Their AIS signals bounce off satellites to reach nearby ships. </p>
<p>This data is a valuable tool for <a href="https://www.cbsnews.com/detroit/news/study-choosing-fish-may-be-killing-sharks/">mapping risk at sea</a> and <a href="https://www.bbc.com/news/science-environment-43169824">understanding the footprints of fishing fleets</a>. AIS data captures an estimated <a href="http://dx.doi.org/10.1126/science.aao564">50% to 80%</a> of fishing operations occurring more than 100 nautical miles from shore.</p>
<p>But in some areas, vessels’ AIS signals can’t reach the satellites, either because reception is poor or because many boats are crowded together – much as cellphones can have difficulty sending text messages in remote wilderness or in crowded stadiums. And just as location tracking can be disabled on phones, fishing vessels can intentionally <a href="https://theconversation.com/when-fishing-boats-go-dark-at-sea-theyre-often-committing-crimes-we-mapped-where-it-happens-196694">disable their AIS</a> if they want to hide their location. Boats that do this may be engaged in criminal activities, such as <a href="https://www.penguinrandomhouse.com/books/538736/the-outlaw-ocean-by-ian-urbina/">illegal fishing or human trafficking</a>.</p>
<p>We calculated how much risk dark vessels pose to marine life by overlapping their activity with the modeled habitats of 14 highly mobile marine predators. Using the same method, we also calculated how much risk observable fishing vessels that broadcast their locations pose to marine life. These two calculations allowed us to understand the additional risk from dark fishing vessels.</p>
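The overlap method described above amounts to weighting fishing effort in each grid cell by how suitable that cell is as habitat, then comparing the total with and without dark-vessel activity. Here is a minimal sketch of that idea; the grid, habitat values and effort hours are all made up for illustration and are not the study’s actual data or code:

```python
import numpy as np

# Toy 5x5 ocean grid: modeled habitat suitability (0-1) for one predator,
# plus fishing effort (hours) for observable and estimated dark vessels.
rng = np.random.default_rng(0)
habitat = rng.random((5, 5))           # modeled habitat suitability
observable = rng.random((5, 5)) * 10   # AIS-broadcast fishing hours
dark = rng.random((5, 5)) * 3          # estimated dark-vessel hours

def risk(habitat, effort):
    """Risk as habitat-weighted fishing effort, summed over the grid."""
    return float((habitat * effort).sum())

base = risk(habitat, observable)
total = risk(habitat, observable + dark)
extra_pct = 100 * (total - base) / base
print(f"Added risk from dark vessels: {extra_pct:.1f}%")
```

Comparing the two totals gives the additional risk attributable to dark vessels, which is the quantity the study reports as a percentage increase.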
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A seal on a beach, with a rope wrapped around it and connected to a large orange float beside the animal." src="https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579990/original/file-20240305-26-vf2vcl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A Hawaiian monk seal entangled on a large fishing float.</span>
<span class="attribution"><a class="source" href="https://photolib.noaa.gov/Collections/Fisheries/Other/emodule/1054/eitem/61324">Doug Helton, NOAA/NOS/ORR/ERD</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Why it matters</h2>
<p>We know that many sea creatures, including endangered species, are <a href="https://www.msc.org/en-us/what-we-are-doing/oceans-at-risk/overfishing">killed by overfishing</a>, <a href="https://www.latimes.com/opinion/op-ed/la-oe-welch-sea-turtles-swordfish-climate-change-20190610-story.html">accidental catch</a> and <a href="https://www.fisheries.noaa.gov/west-coast/marine-mammal-protection/west-coast-large-whale-entanglement-response-program">entanglement in fishing gear</a>. More overlap between wildlife and fishing boats means that those harmful impacts are more likely to happen. </p>
<p>Even considering only <a href="https://globalfishingwatch.org/map/index?start=2023-11-25T00%3A00%3A00.000Z&end=2024-02-25T00%3A00%3A00.000Z&latitude=19&longitude=26&zoom=1.5">observable fishing boats broadcasting their positions</a>, the presence of boats signals considerable risk for marine life. For example, <a href="https://www.fisheries.noaa.gov/species/california-sea-lion">California sea lions</a> forage in Pacific coastal waters from the Canadian border to Baja California and are accidentally caught by boats fishing for hake and halibut. We found observable fishing activity in over 45% of the sea lions’ habitat. </p>
<p>In another example, migratory <a href="https://www.adfg.alaska.gov/index.cfm?adfg=salmonshark.main">salmon sharks</a> feed on salmon near Alaska’s Aleutian Islands during the summer and breed in warmer waters off the coasts of Oregon and California during the winter. Along their journey, salmon sharks are accidentally caught in fishing nets and longlines. We detected observable vessel fishing activity in nearly one-third of salmon shark habitat. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&rect=31%2C0%2C5169%2C3461&q=45&auto=format&w=1000&fit=clip"><img alt="Dozens of fishing boats move out of an urban harbor" src="https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&rect=31%2C0%2C5169%2C3461&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579983/original/file-20240305-28-en5un3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Fishing boats head out for the East China Sea in Zhoushan, Zhejiang Province, China.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/fishing-boats-set-sail-in-the-morning-to-east-china-sea-for-news-photo/1340823231">Shen Lei/VCG via Getty Images</a></span>
</figcaption>
</figure>
<p>Our findings indicate that such threats are higher when dark fishing boats are present. Estimates of risk to California sea lions and salmon sharks increased by 28% and 23%, respectively, when we accounted for dark vessels.</p>
<p>This information could affect fishery regulation. For example, regulators <a href="https://www.fisheries.noaa.gov/feature-story/fish-stock-assessment-101-part-2-closer-look-stock-assessment-models">use risk information</a> to set catch limits for species such as tuna; higher risk could mean that catch limits need to be lower. </p>
<p>For species such as sea lions and salmon sharks that are accidentally caught by fishermen, higher risk levels could indicate that fishing boats should use more selective gear. California is currently acting on this issue by helping fishermen phase out use of <a href="https://opc.ca.gov/2022/11/phase-out-drift-gillnets/">large-mesh drift gill nets</a> in state waters. These nets, which hang like curtains in the water, catch <a href="https://www.fao.org/3/T0502E/T0502E01.htm">many other fishes along with the target species</a>. </p>
<p>Accounting for dark vessels is particularly important in international waters where boats from multiple countries operate, because AIS data is one of the most complete sources of fishing activity across nations. Tracking dark vessels can help make this information as comprehensive as possible and provide insights into the multinational impacts of fishing. </p>
<p>Our study does not account for vessels that do not use any vessel tracking system, or that use systems other than AIS. Therefore, our risk calculations likely still underestimate the true impact of fisheries on marine predators. </p>
<h2>What’s next</h2>
<p>The world’s oceans are rich in life but poor in data, although this is changing. High-resolution <a href="https://www.smithsonianmag.com/smart-news/satellite-maps-reveal-rampant-fishing-untracked-dark-vessels-oceans-180983539/">satellite imagery</a> may soon offer even more information on risk from dark vessels. </p>
<p>President Joe Biden and other global leaders have pledged to protect <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/21/fact-sheet-biden-harris-administration-takes-new-action-to-conserve-and-restore-americas-lands-and-waters/">30% of the ocean by 2030</a>. Better data on human-wildlife interactions at sea can help ensure that new protected areas are in the right places to make a difference.</p><img src="https://counter.theconversation.com/content/224607/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Heather Welch receives funding from NOAA's Office of Law Enforcement. </span></em></p>The toll on wildlife from illegal fishing, bycatch and entanglement in fishing gear is likely underestimated, because it doesn’t account for ‘dark’ fishing vessels, a new study finds.Heather Welch, Researcher in Ecosystem Dynamics, University of California, Santa CruzLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2227002024-02-29T13:39:54Z2024-02-29T13:39:54ZWe’ve been here before: AI promised humanlike machines – in 1958<figure><img src="https://images.theconversation.com/files/578758/original/file-20240228-16-mnuihk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2048%2C1603&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/127906254@N06/20897323365/in/photolist-5VsZ1M-5Vjepm-xQCfbH-5WbkWz-5Wdtn4-5WdqXa-f2s3pc">National Museum of the U.S. Navy/Flickr</a></span></figcaption></figure><p>A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a <a href="https://www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html">brief news story</a> buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” </p>
<p>More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much. </p>
<p>The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history. </p>
<p>The Perceptron, <a href="https://psycnet.apa.org/doi/10.1037/h0042519">invented by Frank Rosenblatt</a>, arguably laid the <a href="https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon">foundations for AI</a>. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.</p>
<p>Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">long-form text-based responses</a> and associate images with text to produce <a href="https://www.assemblyai.com/blog/how-dall-e-2-actually-works/">new images based on prompts</a>. These systems get better and better as they interact more with users. </p>
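That error-correction idea can be written down in a few lines. The sketch below is the generic textbook perceptron rule, not the Mark I’s actual circuitry, and the NAND-gate training data is just an illustrative, linearly separable task:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron: nudge the weights whenever a prediction is wrong."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred      # 0 if correct, +/-1 if wrong
            w += lr * error * xi       # "alter its connections"
            b += lr * error
    return w, b

# Learn NAND: output 1 unless both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([1, 1, 1, 0])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [1, 1, 1, 0]
```

Modern networks replace this single layer of weights with millions of layered ones and a smoother update rule, but the learn-from-error loop is recognizably the same.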
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A chart with a horizontal row of nine colored blocks through the center and numerous black vertical lines connecting the blocks with sections of text above and below the blocks" src="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/576348/original/file-20240219-22-8zapyi.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A timeline of the history of AI starting in the 1940s. Click the author’s name here for a PDF of this poster.</span>
<span class="attribution"><a class="source" href="https://www.daniellejwilliams.com/_files/ugd/a6ff55_cac7c8efb9404a208c0ecd284ff11ba7.pdf">Danielle J. Williams</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>AI boom and bust</h2>
<p>In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like <a href="https://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html">Marvin Minsky</a> claimed that the world would “<a href="https://books.google.com/books?id=2FMEAAAAMBAJ&pg=PA58&dq=In+from+three+to+eight+years+we+will+have+a+machine+with+the+general+intelligence+of+an+average+human+being#v=onepage&q=In%20from%20three%20to%20eight%20years%20we%20will%20have%20a%20machine%20with%20the%20general%20intelligence%20of%20an%20average%20human%20being&f=false">have a machine with the general intelligence of an average human being</a>” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found. </p>
<p>It quickly became apparent that the <a href="https://stacks.stanford.edu/file/druid:cn981xh0967/cn981xh0967.pdf">AI systems knew nothing about their subject matter</a>. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the <a href="https://dougenterprises.com/perceptron-history/">perceived failure of the Perceptron</a>.</p>
<p>However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new <a href="https://www.britannica.com/technology/expert-system">expert systems</a>, AIs designed to solve problems in specific areas of knowledge, that could identify objects and <a href="https://www.britannica.com/technology/MYCIN">diagnose diseases from observable data</a>. There were programs that could make <a href="https://eric.ed.gov/?id=ED161024">complex inferences from simple stories</a>, the <a href="https://web.stanford.edu/%7Elearnest/sail/oldcart.html">first driverless car</a> was ready to hit the road, and <a href="https://robotsguide.com/robots/wabot">robots that could read and play music</a> were playing for live audiences. </p>
<p>But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because <a href="https://towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45">they couldn’t handle novel information</a>. </p>
<p>The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the <a href="https://doi.org/10.1145/97709.97728">problem of knowledge acquisition</a> with <a href="https://www.lightsondata.com/the-history-of-machine-learning/#:%7E:text=In%20the%201990s%20work%20on,learn%E2%80%9D%20%E2%80%94%20from%20the%20results.">data-driven approaches</a> to machine learning that changed how AI acquired knowledge.</p>
<p>This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, <a href="https://www.analyticsvidhya.com/blog/2020/09/quick-history-neural-networks/">made it easier to collect images, mine for data and distribute datasets for machine learning tasks</a>. </p>
<h2>Familiar refrains</h2>
<p>Fast forward to today, and confidence in AI progress has once again begun to echo promises made nearly 60 years ago. The term “<a href="https://www.ibm.com/topics/strong-ai">artificial general intelligence</a>” is used to describe the activities of LLMs such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious. </p>
<p>Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, some contemporary AI theorists make similar claims about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “<a href="https://doi.org/10.48550/arXiv.2303.12712">GPT-4’s performance is strikingly close to human-level performance</a>.” </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Three men sit in chairs on a stage" src="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578773/original/file-20240228-30-172pg4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/APECFutureofAI/3fd286588bd549f196eeed9b3c6919fe/photo?Query=Sam%20Altman&mediaType=photo&sortBy=creationdatetime:desc&dateRange=Anytime&totalCount=164&currentItemNo=38">AP Photo/Eric Risberg</a></span>
</figcaption>
</figure>
<p>But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest. </p>
<p>For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to <a href="https://blogs.nottingham.ac.uk/makingsciencepublic/2023/10/27/chatgpt-and-its-magical-metaphors/">idioms, metaphors, rhetorical questions and sarcasm</a> – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context. </p>
<p>Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently <a href="https://arxiv.org/abs/1811.11553">say it’s a snowplow</a> 97% of the time. </p>
<h2>Lessons to heed</h2>
<p>In fact, it turns out that AI is <a href="https://www.nature.com/articles/d41586-019-03013-5">quite easy to fool</a> in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.</p>
<p>The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.</p><img src="https://counter.theconversation.com/content/222700/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Danielle Williams does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Enthusiasm for the capabilities of artificial intelligence – and claims of approaching humanlike prowess – has followed a boom-and-bust cycle since the middle of the 20th century.Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. LouisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2233112024-02-13T19:06:47Z2024-02-13T19:06:47ZAI tools produce dazzling results – but do they really have ‘intelligence’?<p>Sam Altman, chief executive of ChatGPT-maker OpenAI, is reportedly trying to find <a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0">up to US$7 trillion</a> of investment to manufacture the enormous volumes of computer chips he believes the world needs to run artificial intelligence (AI) systems. Altman also recently said <a href="https://www.reuters.com/technology/openai-ceo-altman-says-davos-future-ai-depends-energy-breakthrough-2024-01-16/">the world will need more energy</a> in the AI-saturated future he envisions – so much more that some kind of technological breakthrough like nuclear fusion may be required.</p>
<p>Altman clearly has big plans for his company’s technology, but is the future of AI really this rosy? As a long-time “artificial intelligence” researcher, I have my doubts.</p>
<p>Today’s AI systems – particularly generative AI tools such as ChatGPT – are not truly intelligent. What’s more, there is no evidence they can become so without fundamental changes to the way they work.</p>
<h2>What is AI?</h2>
<p>One definition of AI is a computer system that can “<a href="https://www.britannica.com/technology/artificial-intelligence">perform tasks commonly associated with intelligent beings</a>”. </p>
<p>This definition, like many others, is a little blurry: should we call spreadsheets AI, as they can carry out calculations that once would have been a high-level human task? How about factory robots, which have not only replaced humans but in many instances surpassed us in their ability to perform complex and delicate tasks?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know</a>
</strong>
</em>
</p>
<hr>
<p>While spreadsheets and robots can indeed do things that were once the domain of humans, they do so by following an algorithm – a process or set of rules for approaching a task and working through it.</p>
<p>One thing we can say is that there is no such thing as “an AI” in the sense of a system that can perform a range of intelligent actions in the way a human would. Rather, there are many different AI technologies that can do quite different things.</p>
<h2>Making decisions vs generating outputs</h2>
<p>Perhaps the most important distinction is between “discriminative AI” and “generative AI”. </p>
<p>Discriminative AI helps with making decisions, such as whether a bank should give a loan to a small business, or whether a doctor diagnoses a patient with disease X or disease Y. AI technologies of this kind have existed for decades, and bigger and better ones are <a href="https://www.fastcompany.com/90927119/why-discriminative-ai-will-continue-to-dominate-enterprise-ai-adoption-in-a-world-flooded-with-discussions-on-generative-ai">emerging all the time</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-is-everywhere-including-countless-applications-youve-likely-never-heard-of-222985">AI is everywhere – including countless applications you've likely never heard of</a>
</strong>
</em>
</p>
<hr>
<p>Generative AI systems, on the other hand – ChatGPT, Midjourney and their relatives – generate outputs in response to inputs: in other words, they make things up. In essence, they have been exposed to billions of data points (such as sentences) and use this to guess a likely response to a prompt. The response may often be “true”, depending on the source data, but there are no guarantees. </p>
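The prediction-based idea can be illustrated, in massively simplified form, by a bigram model that guesses the next word from counts. The tiny corpus below is invented, and a real LLM is incomparably more sophisticated, but the “guess a likely continuation” principle is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real LLM sees billions of sentences.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# Count which word follows which.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def guess_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" - the most frequent follower here
print(guess_next("sat"))  # "on"
```

Note that the model’s answer is merely the most frequent continuation in its training data; nothing in the mechanism distinguishes a true statement from a fluent invention.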
<p>For generative AI, there is no difference between a “hallucination” – a false response invented by the system – and a response a human would judge as true. This appears to be an inherent defect of the technology, which uses a kind of neural network called a transformer. </p>
<h2>AI, but not intelligent</h2>
<p>Another example shows how the goalposts of “AI” are constantly moving. In the 1980s, I worked on a computer system designed to provide expert medical advice on laboratory results. It was written up in the US research literature as <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1986.tb00192.x">one of the first four</a> medical “expert systems” in clinical use, and in 1986 an Australian government report described it as the most successful expert system developed in Australia. </p>
<p>I was pretty proud of this. It was an AI landmark, and it performed a task that normally required highly trained medical specialists. However, the system wasn’t intelligent at all. It was really just a kind of look-up table which matched lab test results to high-level diagnostic and patient management advice. </p>
<p>There is now technology which makes it very easy to build such systems, so there are thousands of them in use around the world. (This technology, based on research by myself and colleagues, is provided by an Australian company called Beamtree.)</p>
<p>In performing a task once done by highly trained specialists, they are certainly “AI”, but they are still not at all intelligent (although the more complex ones may have thousands and thousands of rules for looking up answers).</p>
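A caricature of such a rule-based look-up in code might look like the following. The rule and its thresholds are purely hypothetical illustrations, not drawn from the author’s actual system:

```python
# Hypothetical rule mapping one lab result to advice; real expert
# systems encode thousands of such rules, curated by specialists.
def thyroid_advice(tsh):
    """Match a TSH lab value (illustrative thresholds only)."""
    if tsh < 0.4:
        return "TSH low: consistent with hyperthyroidism; suggest free T4."
    elif tsh <= 4.0:
        return "TSH within reference range."
    else:
        return "TSH high: consistent with hypothyroidism; suggest free T4."

print(thyroid_advice(7.2))
```

However useful the output, the program is doing nothing more than branching on fixed conditions written by humans – which is the author’s point.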
<p>The transformer networks used in generative AI systems still run on sets of rules, though there may be millions or billions of them, and they cannot easily be explained in human terms. </p>
<h2>What is real intelligence?</h2>
<p>If algorithms can produce dazzling results of the kind we see from ChatGPT without being intelligent, what is real intelligence?</p>
<p>We might say intelligence is insight: the judgement that something is or is not a good idea. Think of Archimedes, leaping from his bath and shouting “Eureka” because he had had an insight into the principle of buoyancy.</p>
<p>Generative AI doesn’t have insight. ChatGPT can’t tell you if its answer to a question is better than Gemini’s. (Gemini, until recently known as Bard, is Google’s competitor to OpenAI’s GPT family of AI tools.)</p>
<p>Or to put it another way: generative AI might produce amazing pictures in the style of Monet, but if it were trained only on Renaissance art it would never invent Impressionism.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an Impressionist painting of water lilies on a pond." src="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=534&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=534&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=534&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=671&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=671&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=671&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nympheas (Waterlilies)</span>
<span class="attribution"><a class="source" href="https://artsandculture.google.com/asset/0gEk3X6Bn40QKg">Claude Monet / Google Art Project</a></span>
</figcaption>
</figure>
<p>Generative AI is extraordinary, and people will no doubt find widespread and very valuable uses for it. It already provides extremely useful tools for transforming and presenting (but not discovering) information, and tools for turning specifications into code are in routine use. </p>
<p>These will get better and better: Google’s just-released Gemini, for example, appears to try to <a href="https://fortune.com/2023/12/07/google-launches-deepmind-ai-gemini-chatgpt-openai-factuality-hallucination/">minimise the hallucination problem</a> by using search and then re-expressing the search results. </p>
<p>Nevertheless, as we become more familiar with generative AI systems, we will see more clearly that it is not truly intelligent; there is no insight. It is not magic, but a very clever magician’s trick: an algorithm that is the product of extraordinary human ingenuity.</p><img src="https://counter.theconversation.com/content/223311/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul Compton was a founder of Pacific Knowledge Systems, later renamed Beamtree, but no longer has any involvement
with the company.</span></em></p>Existing AI systems learn patterns from very large piles of data – but they have no insight.Paul Compton, Emeritus professor in Computer Science and Engineering, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2229852024-02-12T01:33:48Z2024-02-12T01:33:48ZAI is everywhere – including countless applications you’ve likely never heard of<figure><img src="https://images.theconversation.com/files/574806/original/file-20240211-28-c36zto.jpg?ixlib=rb-1.1.0&rect=74%2C86%2C3696%2C2723&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/red-and-black-abstract-illustration-aQYgUYwnCsM">Michael Dziedzic/Unsplash</a></span></figcaption></figure><p>Artificial intelligence (AI) is seemingly everywhere. Right now, generative AI in particular – tools like Midjourney, ChatGPT, Gemini (previously Bard) and others – is at the peak of hype.</p>
<p>But as <a href="https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/">an academic discipline</a>, AI has been around for much longer than just the last couple of years. When it comes to real-world applications, many have stayed hidden or relatively unknown. These AI tools are much less glossy than fantasy-image generators – yet they are also ubiquitous.</p>
<p>As AI technologies continue to progress, we’ll only see an increase in AI use across industries. This includes healthcare and consumer tech, but also more concerning uses, such as warfare. Here’s a rundown of some of the wide-ranging AI applications you may be less familiar with.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/2023-was-the-year-of-generative-ai-what-can-we-expect-in-2024-219808">2023 was the year of generative AI. What can we expect in 2024?</a>
</strong>
</em>
</p>
<hr>
<h2>AI in healthcare</h2>
<p>Various AI systems are already being used in the health field, both to improve patient outcomes and to advance health research.</p>
<p>One of the strengths of computer programs powered by artificial intelligence is their ability to sift through and <a href="https://www.eiopa.europa.eu/browse/digitalisation-and-financial-innovation/artificial-intelligence-and-big-data_en">analyse truly enormous data sets</a> in a fraction of the time it would take a human – or even a team of humans – to accomplish.</p>
<p>For example, AI is helping researchers <a href="https://scopeblog.stanford.edu/2022/06/10/using-ai-to-find-disease-causing-genes/">comb through vast genetic data libraries</a>. By analysing large data sets, geneticists can home in on genes that could contribute to various diseases, which in turn will help develop <a href="https://theconversation.com/how-can-doctors-use-technology-to-help-them-diagnose-64555">new diagnostic tests</a>.</p>
<p>AI is also helping to speed up the search for medical treatments. Selecting and testing treatments for a particular disease can take ages, so leveraging AI’s ability to comb through data can be helpful here, too.</p>
<p>For example, United States-based non-profit Every Cure is using AI algorithms to search through medical databases <a href="https://www.labiotech.eu/trends-news/ai-discovers-existing-drug-rare-disease/">to match up existing medications</a> with illnesses they might treat. This approach promises to save significant time and resources.</p>
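<p>One simple way such matching can work is to represent drugs and diseases as feature vectors and rank candidate drugs by similarity to a disease profile. The Python sketch below is purely illustrative: the drug names and pathway profiles are invented, and real repurposing systems rely on much richer data such as knowledge graphs and learned embeddings.</p>

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented feature profiles: how strongly each (hypothetical) drug acts
# on three biological pathways, and how strongly a disease involves them.
drugs = {
    "drug_a": [0.9, 0.1, 0.0],
    "drug_b": [0.1, 0.8, 0.3],
}
disease = [0.85, 0.15, 0.05]  # profile of the illness being matched

# Rank existing drugs by how closely their action profile fits the disease.
best = max(drugs, key=lambda d: cosine(drugs[d], disease))
```

<p>Scaled up to thousands of drugs and diseases, ranking by similarity like this is far faster than testing each pairing by hand, which is where the time savings come from.</p>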
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/artificial-intelligence-is-already-in-our-hospitals-5-questions-people-want-answered-217374">Artificial intelligence is already in our hospitals. 5 questions people want answered</a>
</strong>
</em>
</p>
<hr>
<h2>The hidden AIs</h2>
<p>Outside of medical research, other fields not directly related to computer science are also benefiting from AI.</p>
<p>At CERN, home of the Large Hadron Collider, a <a href="https://www.staffs.ac.uk/news/2022/09/new-research-will-assist-further-exploration-of-the-universe">recently developed advanced AI algorithm</a> is helping physicists tackle some of the <a href="https://atlas.cern/node">most challenging aspects</a> of analysing the particle data generated in their experiments. </p>
<p>Last year, astronomers used an AI algorithm for the first time to <a href="https://www.space.com/ai-finds-first-potentially-dangerous-asteroid">identify a “potentially hazardous” asteroid</a> – a space rock that might one day collide with Earth. This algorithm will be a core part of the operations of the Vera C. Rubin Observatory currently under construction in Chile.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-dangerous-asteroids-heading-to-earth-are-so-hard-to-detect-113845">Why dangerous asteroids heading to Earth are so hard to detect</a>
</strong>
</em>
</p>
<hr>
<p>One major area of our lives that uses largely “hidden” AI <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/635609/EPRS_BRI(2019)635609_EN.pdf">is transportation</a>. Millions of flights and train trips are coordinated by AI all over the world. These AI systems are meant to optimise schedules to reduce costs and maximise efficiency.</p>
<p>Artificial intelligence can also manage real-time road traffic by <a href="https://today.usc.edu/usc-engineers-use-artificial-intelligence-to-reduce-traffic-jams/">analysing traffic patterns</a>, volume and other factors, and then adjusting traffic lights and signals accordingly. <a href="https://www.forbes.com/sites/ariannajohnson/2023/04/14/youre-already-using-ai-heres-where-its-at-in-everyday-life-from-facial-recognition-to-navigation-apps/?sh=62ab6a1627ac">Navigation apps like Google Maps</a> also use AI optimisation algorithms to find the best path in their navigation systems.</p>
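<p>Under the hood, route-finding of this kind typically builds on classical shortest-path search, often Dijkstra’s algorithm or its A* variant, with live traffic data feeding the edge weights. As a minimal sketch (the road network and travel times below are invented), Dijkstra’s algorithm finds the cheapest route:</p>

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: repeatedly expand the cheapest frontier node
    until the goal is reached, tracking the path taken along the way."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical road network: travel times in minutes between junctions.
roads = {
    "home":     [("junction", 4), ("highway", 2)],
    "junction": [("office", 5)],
    "highway":  [("junction", 1), ("office", 7)],
}
cost, route = shortest_path(roads, "home", "office")
```

<p>Managing traffic in real time then amounts to updating the travel-time weights as conditions change and re-running the search.</p>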
<p>AI is also present in various everyday items. Robot <a href="https://www.sciencefocus.com/science/navigation-in-robot-vacuum-cleaners">vacuum cleaners use AI software</a> to process all their sensor inputs and deftly navigate our homes.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A shiny round vacuum cleaning up popcorn crumbs under a person on sofa wearing purple socks" src="https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574798/original/file-20240211-16-z7khhk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Robot vacuums use AI to navigate our homes.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-girl-eating-popcorn-during-movie-787348768">Diego Cervo/Shutterstock</a></span>
</figcaption>
</figure>
<p>The most cutting-edge cars use <a href="https://www.prnewswire.com/news-releases/tenneco-supplying-intelligent-suspensions-for-new-mercedes-amg-sl-class-roadsters-301545384.html">AI in their suspension systems</a> so passengers can enjoy a smooth ride.</p>
<p>Of course, there is also no shortage of more quirky AI applications. A few years ago, UK-based brewery startup IntelligentX <a href="https://d3.harvard.edu/platform-digit/submission/intelligentx-changing-the-world-one-beer-at-a-time/">used AI to make custom beers</a> for its customers. Other breweries are also using AI to help them optimise beer production.</p>
<p>And <a href="https://www.media.mit.edu/projects/meet-the-ganimals/overview/">Meet the Ganimals</a> is a “collaborative social experiment” from the MIT Media Lab, which uses generative AI technologies to come up with new species that have never existed before.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/snapchats-creepy-ai-blunder-reminds-us-that-chatbots-arent-people-but-as-the-lines-blur-the-risks-grow-211744">Snapchat's 'creepy' AI blunder reminds us that chatbots aren't people. But as the lines blur, the risks grow</a>
</strong>
</em>
</p>
<hr>
<h2>AI can also be weaponised</h2>
<p>On a less lighthearted note, AI also has many applications in defence. In the wrong hands, some of these uses can be terrifying. </p>
<p>For example, <a href="https://www.belfercenter.org/publication/biosecurity-age-ai-whats-risk">some experts have warned</a> AI can aid the creation of bioweapons. This could happen through gene sequencing, helping non-experts easily produce risky pathogens such as novel viruses. </p>
<p>Where active warfare is taking place, military powers can design <a href="https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/JA-20/Crosby-Operationalizing-AI-1.pdf">warfare scenarios</a> and <a href="https://cetas.turing.ac.uk/sites/default/files/2023-06/cetas_research_report_-_ai_in_wargaming.pdf">plans</a> using AI. If a power uses such tools without applying ethical considerations or even deploys autonomous AI-powered weapons, it could have catastrophic consequences.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-killer-robots-mean-for-the-future-of-war-185243">What killer robots mean for the future of war</a>
</strong>
</em>
</p>
<hr>
<p>AI has been used in <a href="https://www.defense.gov/News/News-Stories/Article/Article/2730215/vice-admiral-discusses-potential-of-ai-in-missile-defense-testing-operations/">missile guidance systems</a> to maximise the effectiveness of a military’s operations. It can also be used to <a href="https://www.abc.net.au/news/2023-12-02/australia-to-test-tracking-chinese-submarines-with-ai-via-aukus/103181178">detect covertly operating submarines</a>.</p>
<p>In addition, AI can be used to predict and identify the activities and movements of terrorist groups. This way, intelligence agencies can come up with preventive measures. Since these types of AI systems have complex structures, they require high processing power to get real-time insights.</p>
<p>Much has also been said about how generative AI is supercharging people’s abilities to produce fake news and disinformation. This has the potential to affect the democratic process and sway the outcomes of elections.</p>
<p>AI is present in our lives in so many ways, it is nearly impossible to keep track. Its myriad applications will affect us all.</p>
<p>This is why ethical and responsible use of AI, along with well-designed regulation, is more important than ever. This way we can reap the many benefits of AI while making sure we stay ahead of the risks.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australia-plans-to-regulate-high-risk-ai-heres-how-to-do-that-successfully-221321">Australia plans to regulate 'high-risk' AI. Here's how to do that successfully</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/222985/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Niusha Shafiabady does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence has been around for decades, and is much more than just ChatGPT. Here’s a rundown of some lesser known AI applications.Niusha Shafiabady, Associate Professor in Computational Intelligence, Charles Darwin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2200252024-02-08T16:54:26Z2024-02-08T16:54:26ZAI in the developing world: how ‘tiny machine learning’ can have a big impact<figure><img src="https://images.theconversation.com/files/574354/original/file-20240208-22-lty35i.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C2000%2C1353&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A team in Argentina is using sensors based on TinyML technology to study _Chelonoidis chilensis_ tortoises. Little is known about its biology and the species is in a vulnerable state. The small sensors, in black on the shell, are small enough to allow the animal to move freely. </span> <span class="attribution"><span class="license">Author provided</span></span></figcaption></figure><p>The landscape of artificial intelligence (AI) applications has traditionally been dominated by the use of resource-intensive servers centralised in industrialised nations. However, recent years have witnessed the emergence of small, energy-efficient devices for AI applications, a concept known as <a href="https://www.datacamp.com/blog/what-is-tinyml-tiny-machine-learning">tiny machine learning</a> (TinyML).</p>
<p>We’re most familiar with consumer-facing applications such as <a href="https://arxiv.org/abs/1711.07128">Siri, Alexa, and Google Assistant</a>, but the limited cost and small size of such devices allow them to be deployed in the field. For example, the technology has been used to <a href="https://dl.acm.org/doi/10.1145/3582515.3609514">detect mosquito wingbeats and so help prevent the spread of malaria</a>. It’s also been part of the <a href="https://www.smartparks.org/opencollar-io/">development of low-power animal collars to support conservation efforts</a>.</p>
<h2>Small size, big impact</h2>
<p>Distinguished by their small size and low cost, TinyML devices operate within constraints reminiscent of the dawn of the personal-computer era – memory is measured in kilobytes and hardware can be had for as little as US$1. This is possible because TinyML doesn’t require a laptop computer or even a mobile phone. Instead, it can run on the simple microcontrollers that power standard electronic components worldwide. In fact, given that there are already <a href="https://venturebeat.com/ai/why-tinyml-is-a-giant-opportunity/">250 billion microcontrollers deployed globally</a>, devices that support TinyML are already available at scale.</p>
<p>A number of development packages for TinyML applications are available. Two popular options are <a href="https://store-usa.arduino.cc/products/arduino-tiny-machine-learning-kit">Arduino</a> and <a href="https://www.seeedstudio.com/XIAO-ESP32S3-Sense-p-5639.html">Seeed Studio</a>, both of which come with additional sensors for audio, vision, and motion-based applications.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&rect=0%2C48%2C4031%2C2764&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&rect=0%2C48%2C4031%2C2764&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573124/original/file-20240202-23-rgy9pf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">TinyML workshop at Universiti Kebangsaan, Malaysia, 2023. Participants working on the ‘smile’ or ‘serious’ face-detection application.</span>
<span class="attribution"><span class="source">Marco Zennaro</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>How does it work?</h2>
<p>Like classical machine learning, TinyML involves data collection – often from Internet of Things (IoT) devices – and cloud-based training. Let’s consider an outdoor object-detection application – for example, counting the number of cars on a street to gauge how heavy traffic is. In the classical ML process, images have to be gathered using a webcam and sent to a cloud server where the training takes place. Once the trained model provides an acceptable level of accuracy, the system is ready to detect cars from a new video feed. The ML model runs on the cloud, so an Internet connection is necessary.</p>
<p>In the TinyML system, however, the model is deployed on the device itself and is ready to detect objects with no need for connectivity. The first part of the process (gathering data and training the model on the cloud) follows the classical ML model but the inference phase (detecting objects) runs on the device itself. This is how TinyML diverges from traditional server-based architectures: it deploys pre-trained compact models optimised for limited resources onto embedded devices, enabling real-time, low-power data analysis and decision-making, all independent of cloud connectivity.</p>
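<p>A key step in fitting a trained model into a microcontroller’s few kilobytes is quantisation: storing weights as 8-bit integers instead of 32-bit floats. The Python sketch below is a generic illustration of post-training int8 quantisation, not the pipeline of any particular TinyML framework:</p>

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto int8 using a single scale factor,
    a standard way to shrink a trained model for tiny devices."""
    scale = np.abs(weights).max() / 127.0  # one float step per int8 step
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

# A made-up layer of trained float32 weights.
w = np.random.randn(64, 32).astype(np.float32)
q, scale = quantize_int8(w)

print("float32 size:", w.nbytes, "bytes")  # 64 * 32 * 4 bytes
print("int8 size:   ", q.nbytes, "bytes")  # 64 * 32 * 1 byte
```

<p>The weight matrix shrinks to a quarter of its original size, at the cost of a small, bounded rounding error per weight – a trade-off that makes on-device inference feasible.</p>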
<p>TinyML offers several advantages over traditional centralised server-based models:</p>
<ul>
<li><p>Affordability: the technology’s low cost makes these devices accessible to a wide range of users including educational institutions and students in the developing world.</p></li>
<li><p>Sustainability: the modest energy consumption produces a <a href="https://dl.acm.org/doi/10.1145/3608473">low carbon footprint</a>, reducing impact on the environment.</p></li>
<li><p>Flexibility and scalability: it enables the development of applications that address the needs of local communities rather than global agendas.</p></li>
<li><p>Internet independence: because the model runs on the device itself, TinyML can operate without online connectivity. This is particularly beneficial for the third of the world that still does not have Internet access.</p></li>
</ul>
<p>TinyML applications already power <a href="https://cms.tinyml.org/wp-content/uploads/summit2021/tinyMLSummit2021d3_tinyTalks_Gandhi.pdf">personalised sensors for athletics and provide localisation where GPS isn’t available</a>. They’re also employed by startups such as <a href="https://usefulsensors.com/">Useful Sensors</a>, which offers privacy-conserving conversational agents, QR code scanners, and person-detection hardware. Only through TinyML could these smart devices run on low-cost, low-power microcontrollers.</p>
<h2>Developing in the Global South</h2>
<p>To help the use of TinyML grow in regions where a centralised machine-learning model would face significant challenges, we built <a href="https://tinymledu.org/4d">TinyML4D</a>, a network of academic institutions in developing countries. It already includes more than 40 countries spanning the Global South, from Colombia to Ethiopia to Malaysia.</p>
<p>With support from UNESCO’s International Centre for Theoretical Physics (ICTP) and from Harvard University’s John A. Paulson School of Engineering and Applied Sciences, the network was launched in 2021. Its aim is to develop a community of educators, researchers and practitioners focused on both improving access to TinyML education and developing innovative solutions to address the unique challenges faced by developing countries.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=216&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=216&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=216&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=271&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=271&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573122/original/file-20240202-19-rq3yjg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=271&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Map of the TinyML Academic Network. More than 50 universities are part of the network as of February 2024.</span>
<span class="attribution"><span class="source">Marcelo Rovai</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>To make all this possible, we needed to develop ways to share educational resources globally. Initial efforts included distributing TinyML hardware kits to selected universities with budgetary challenges. We also organised global and regional (Africa, Latin America, and Asia) workshops and training sessions. Using a mixture of in-person, online and hybrid methods, we’ve reached more than 1,000 participants in over 50 countries. No-cost or low-cost hardware, combined with open-source course materials and workshops, has enabled TinyML to be taught by many of our network members in their home countries.</p>
<p>Beyond our workshops and training activities, we have launched a series of regional collaborations, outreach activities and virtual “show and tell” events to share best practices and augment our network’s impact among practitioners. Throughout, there has been a strong focus on addressing the United Nations’ sustainable development goals (SDGs).</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=797&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=797&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=797&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1001&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1001&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573125/original/file-20240202-27-wcu4vh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1001&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Workshop at Kobe Institute of Computing, Japan, in 2023. Participants are working on ‘keyword spotting’ applications, developing their personal Alexa/Google Home on a $10 device. The system can be trained to recognise local dialects.</span>
<span class="attribution"><span class="source">Marco Zennaro</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>These collaborations have led to multiple peer-reviewed papers on TinyML applications. In addition to the work to <a href="https://dl.acm.org/doi/10.1145/3524458.3547258">detect mosquito species</a>, which could lead to more efficient malaria-control campaigns, these include the <a href="https://dl.acm.org/doi/10.1145/3586991">responsible use of intelligent sensors</a> and low-cost solutions for <a href="https://www.youtube.com/watch?v=XkZEFzBfiJI">monitoring atrial fibrillation and sinus rhythm</a>. TinyML devices are also used by Cornell University’s <a href="https://www.elephantlisteningproject.org/about-elp/">“Elephant Listening Project”</a>, as well as for <a href="https://arxiv.org/pdf/2105.11493.pdf">monitoring water quality in aquaculture to help make it more sustainable</a>, a project supported by the EU’s <a href="https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-2020_en">Horizon 2020</a> programme.</p>
<h2>Looking forward</h2>
<p>TinyML represents a transformative approach to artificial intelligence and is especially pertinent to developing countries. It offers a sustainable path toward democratising AI technology, fostering local innovation, and addressing regional challenges.</p>
<p>The growth of TinyML devices and applications is not without potential challenges and risks, however. The number of devices shipped is expected to rise from the millions today to <a href="https://www.abiresearch.com/press/tinyml-device-shipments-grow-25-billion-2030-15-million-2020/">2.5 billion in 2030</a>, which could lead to increased electronic waste given the devices’ low cost. There’s also the risk of embedded biases in critical ML models – because they operate standalone, there’s no option for updates. Finally, there are privacy concerns due to the discreet integration of devices into the environment. As the field evolves, it will be crucial to navigate these issues responsibly, and so help ensure that TinyML remains a tool for positive change and sustainable development.</p>
<hr>
<p><em>UNESCO’s duty remains to reaffirm the humanist missions of education, science and culture. Mobilise education to transform lives; Reconcile with the living; Promote inclusion and mutual understanding; Foster science and technology at the service of humanity are UNESCO’s key strategic objectives.</em></p><img src="https://counter.theconversation.com/content/220025/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations other than their research institutions.</span></em></p>Traditionally dominated by the use of centralised, resource-intensive servers, machine learning is being democratised with the growth of “TinyML”, distinguished by its small size and low cost.Marco Zennaro, Coordinator, Science, Technology and Innovation Unit, Abdus Salam International Centre for Theoretical Physics (ICTP)Brian Plancher, Assistant Professor of Computer Science, Barnard CollegeMatthew Stewart, Postdoctoral Researcher, Harvard UniversityVijay Janapa Reddi, John L. Loeb Associate Professor of Engineering and Applied Sciences, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2182182024-01-03T13:44:46Z2024-01-03T13:44:46ZAI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024<figure><img src="https://images.theconversation.com/files/566947/original/file-20231220-21-tpogin.jpg?ixlib=rb-1.1.0&rect=0%2C5%2C3840%2C2149&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI has arrived. How will it change society in the year ahead?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/robot-in-sci-fi-tonnel-concept-of-future-3d-royalty-free-image/1292600478">Pavel_Chag/iStock via Getty Images</a></span></figcaption></figure><p><em>2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the <a href="https://theconversation.com/generative-ai-5-essential-reads-about-the-new-era-of-creativity-job-anxiety-misinformation-bias-and-plagiarism-203746">emergence of generative AI</a>, which moved the technology from the shadows to center stage in the public imagination. 
It also saw <a href="https://theconversation.com/openai-is-a-nonprofit-corporate-hybrid-a-management-expert-explains-how-this-model-works-and-how-it-fueled-the-tumult-around-ceo-sam-altmans-short-lived-ouster-218340">boardroom drama</a> in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue <a href="https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694">an executive order</a> and the European Union <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">pass a law</a> aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.</em></p>
<p><em>We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.</em></p>
<hr>
<p><strong>Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder</strong></p>
<p>2023 was the <a href="http://bit.ly/ai-ethics-news">year of AI hype</a>. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of <a href="https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375">overcoming ethical debt in tech</a>, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.</p>
<p>One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, <a href="https://cfiesler.medium.com/chatgpt-wrapped-an-ais-year-in-review-dc37252c494f">most relevant headlines focused on</a> how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that <a href="https://www.thedailybeast.com/ai-written-homework-is-rising-so-are-false-accusations">often do more harm than good</a>.</p>
<p>However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools <a href="https://www.nytimes.com/2023/08/24/business/schools-chatgpt-chatbot-bans.html">rescinded their bans</a>. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.</p>
<p>So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, <a href="https://dl.acm.org/doi/10.1145/365153.365168">wrote that</a> machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away. </p>
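To see how plain ELIZA's mechanism really was, here is a minimal sketch in the same spirit. The rules below are invented for illustration and are not Weizenbaum's original DOCTOR script; the point is only that the "magic" is a short list of regular-expression patterns and substitution templates.

```python
import re

# A few illustrative ELIZA-style rules: a regex pattern paired with a
# response template that reuses the captured text. The 1966 script was
# larger, but the mechanism was no deeper than this.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

For example, `respond("I am feeling anxious")` echoes the captured phrase back as "How long have you been feeling anxious?", and anything that matches no rule falls through to the stock reply. Seeing the trick written out is exactly the kind of plain explanation Weizenbaum had in mind.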
<p>I think it’s possible to make this happen. I hope that universities that are <a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/05/19/colleges-race-hire-and-build-amid-ai-gold">rushing to hire more technical AI experts</a> put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/eXdVDhOGqoE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Many of the challenges in the year ahead have to do with problems of AI that society is already facing.</span></figcaption>
</figure>
<hr>
<p><strong>Kentaro Toyama, Professor of Community Information, University of Michigan</strong></p>
<p>In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, <a href="https://books.google.co.jp/books?id=2FMEAAAAMBAJ&pg=PA58">told Life magazine</a>, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With <a href="https://futurism.com/singularity-explain-it-to-me-like-im-5-years-old">the singularity</a> – the moment artificial intelligence matches and begins to exceed human intelligence – still not here, it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI. </p>
<p>Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public <a href="https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704">release of ChatGPT in 2022</a> kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications. </p>
<p>The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of <a href="https://www.mathworks.com/discovery/deep-learning.html">deep learning</a> – what might be called <a href="https://medium.com/@kentarotoyama/characterizing-generative-ai-circa-2023-d73a4d334bef">generalized hard reasoning</a>, things like <a href="https://www.livescience.com/21569-deduction-vs-induction.html">deductive logic</a>. Will quick tweaks to existing <a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897">neural-net</a> algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist <a href="https://dblp.org/pid/164/5919.html">Gary Marcus</a> <a href="https://arxiv.org/ftp/arxiv/papers/2002/2002.06177.pdf">suggests</a>? Armies of AI scientists are working on this problem, so I expect some headway in 2024. </p>
<p>Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes – AI-generated images and videos that are difficult to detect – are likely to run rampant despite <a href="https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html">nascent regulation</a>, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago. </p>
<p>Speaking of problems, the very people sounding the loudest alarms about AI – like <a href="https://edition.cnn.com/2023/04/17/tech/elon-musk-ai-warning-tucker-carlson/index.html">Elon Musk</a> and <a href="https://edition.cnn.com/2023/10/31/tech/sam-altman-ai-risk-taker/index.html">Sam Altman</a> – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels. </p>
<hr>
<p><strong>Anjana Susarla, Professor of Information Systems, Michigan State University</strong></p>
<p>In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to <a href="https://en.wikipedia.org/wiki/ChatGPT">ChatGPT a year back</a>, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, <a href="https://www.nytimes.com/2023/12/01/podcasts/transcript-ezra-klein-interviews-casey-newton-kevin-roose.html">but also from videos on YouTube, songs on Spotify</a>, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video. </p>
<p>Companies are racing to <a href="https://llm.mlc.ai">develop LLMs that can be deployed</a> on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these <a href="https://arxiv.org/pdf/2306.02707.pdf">lightweight LLMs</a> and <a href="https://hai.stanford.edu/sites/default/files/2023-12/Governing-Open-Foundation-Models.pdf">open source LLMs</a> could usher in a <a href="https://www.oneusefulthing.org/p/an-ai-haunted-world">world of autonomous AI agents</a> – a world that society is not necessarily prepared for. </p>
<p>These advanced AI capabilities offer immense transformative power in applications ranging from <a href="https://cloud.google.com/blog/products/ai-machine-learning/multimodal-generative-ai-search">business</a> to <a href="https://doi.org/10.1186/s12909-023-04698-z">precision medicine</a>. My chief concern is that such advanced capabilities will pose new challenges for <a href="https://doi.org/10.1073/pnas.2208839120">distinguishing between human-generated content and AI-generated content</a>, as well as pose new types of <a href="https://doi.org/10.1038/d41586-023-00340-6">algorithmic harms</a>. </p>
<p>The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can <a href="https://doi.org/10.48550/arXiv.2310.00737">manufacture synthetic identities</a> and orchestrate <a href="https://doi.org/10.48550/arXiv.2305.06972">large-scale misinformation</a>. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as <a href="https://doi.org/10.1145/3498366.3505816">information verification, information literacy and serendipity</a> provided by search engines, social media platforms and digital services. </p>
<p>The Federal Trade Commission has warned <a href="https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services">about fraud, deception, infringements on privacy</a> and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube <a href="https://blog.youtube/inside-youtube/our-approach-to-responsible-ai-innovation/">have instituted policy guidelines</a> for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act. </p>
<p>A new <a href="https://bluntrochester.house.gov/news/documentsingle.aspx?DocumentID=4062">bipartisan bill</a> introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.</p><img src="https://counter.theconversation.com/content/218218/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institutes of Health and the Omura-Saxena Professorship in Responsible AI.</span></em></p><p class="fine-print"><em><span>Casey Fiesler receives funding from the National Science Foundation, and is currently a funded Visiting Fellow with the Notre Dame-IBM Tech Ethics Lab.</span></em></p><p class="fine-print"><em><span>Kentaro Toyama's research is funded in part by the National Science Foundation. </span></em></p>Artificial intelligence is everywhere, and the tech industry is racing along to develop ever more powerful AIs. Three scholars look ahead to the next chapter in this technological revolution.Anjana Susarla, Professor of Information Systems, Michigan State UniversityCasey Fiesler, Associate Professor of Information Science, University of Colorado BoulderKentaro Toyama, Professor of Community Information, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2114202023-12-13T19:04:04Z2023-12-13T19:04:04ZAI can already diagnose depression better than a doctor and tell you which treatment is best<figure><img src="https://images.theconversation.com/files/563806/original/file-20231206-29-djskol.jpg?ixlib=rb-1.1.0&rect=28%2C9%2C6339%2C4229&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/portrait-patient-lying-on-ct-mri-1948882015">Shutterstock</a></span></figcaption></figure><p>Artificial intelligence (AI) is poised to revolutionise the way we diagnose and treat illness. It could be particularly helpful for depression because it could make more accurate diagnoses and determine which treatments are more likely to work. </p>
<p>Some <a href="https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2671413">20% of us</a> will have depression at least once in our lifetimes. Around the world,
<a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(18)32279-7/fulltext">300 million</a> people are currently experiencing depression, with <a href="https://www.abs.gov.au/statistics/health/mental-health/national-study-mental-health-and-wellbeing/2020-2022">1.5 million</a> Australians likely to be depressed at any one time. Because of this, depression has been described by the <a href="https://iris.who.int/bitstream/handle/10665/254610/W?sequence=1">World Health Organization</a> as the single biggest contributor to ill health around the world. </p>
<p>So how exactly could AI help?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/depression-isnt-just-sadness-its-often-a-loss-of-pleasure-210429">Depression isn't just sadness – it's often a loss of pleasure</a>
</strong>
</em>
</p>
<hr>
<h2>Depression can be hard to spot</h2>
<p>Despite its frequency, depression is difficult to diagnose. So hard, in fact, that general practitioners accurately detect depression in <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(09)60879-5/fulltext">less than half</a> of cases.</p>
<p>This is because there is no one test for depression: doctors use self-reported symptoms, questionnaires and clinical observations to make a diagnosis. But <a href="https://www.mayoclinic.org/diseases-conditions/depression/symptoms-causes/syc-20356007">symptoms of depression</a> are not the same for everyone. Some people may sleep more, others sleep less; some people lack energy and interest in activities, while others may feel sad or irritable. </p>
<p>For those who are accurately diagnosed with depression, there are a range of <a href="https://www.blackdoginstitute.org.au/resources-support/depression/treatment/">treatment options</a> including talk therapy, medications and lifestyle change. However, response to treatment is different for each person, and we have no way to know ahead of time which treatments will work and which won’t. </p>
<p>AI trains computers to think like humans, with a particular focus on three human-like behaviours: learning, reasoning and self-correction (to fine-tune and improve performance over time). One branch of AI is machine learning, the goal of which is to train computers to learn, find patterns in data and make data-informed predictions without guidance from humans. </p>
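As an illustration of what "finding patterns in data" can mean at its simplest, here is a toy nearest-centroid classifier in Python. All data, features and labels below are made up purely to show the mechanism; real clinical models are vastly more complex.

```python
# Toy "learning from data": training just averages the examples of each
# class, and prediction assigns a new point to the class whose average
# it sits closest to. All numbers are invented for illustration.

def train(examples):
    """examples: list of (feature_vector, label). Returns class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical two-feature training data for two classes.
data = [([1.0, 1.0], "A"), ([1.2, 0.8], "A"),
        ([5.0, 5.0], "B"), ([4.8, 5.2], "B")]
model = train(data)
```

Nothing in this sketch is hand-coded about classes "A" or "B" beyond the examples themselves: the pattern is extracted from the data, which is the core idea the research described below scales up to medical records and brain scans.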
<p>In recent years there has been a surge in research applying AI to illnesses like depression, which can be difficult to diagnose and treat. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man sits with head in hands opposite clinician with clipboard checklist" src="https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563808/original/file-20231206-20-9nfvdv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Doctors usually diagnose depression via questionnaires and self-ratings.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/people-have-stressed-problem-consultation-psychologist-1897694278">Shutterstock</a></span>
</figcaption>
</figure>
<h2>What they’ve found so far</h2>
<p>Scientists have compared <a href="https://chat.openai.com/auth/login">ChatGPT</a> diagnoses and medical recommendations to those of real-life doctors with <a href="https://fmch.bmj.com/content/11/4/e002391">surprising results</a>. When given information on fictional patients of varied depression severity, sex and socioeconomic status, ChatGPT mostly recommended talk therapy. In contrast, doctors recommended antidepressants. </p>
<p><a href="https://www.nice.org.uk/guidance/ng222">US</a>, <a href="https://www.rcpsych.ac.uk/docs/default-source/improving-care/better-mh-policy/position-statements/ps04_19---antidepressants-and-depression.pdf?sfvrsn=ddea9473_5">British</a> and <a href="https://www.tg.org.au/">Australian</a> guidelines recommend talk therapy as the first treatment option ahead of medication. </p>
<p>This suggests ChatGPT may be more likely to follow clinical guidelines, whereas GPs may have a tendency to <a href="https://theconversation.com/are-antidepressants-over-prescribed-in-australia-11788">overprescribe</a> antidepressants. </p>
<p>ChatGPT is also less influenced by sex and socioeconomic biases, while doctors are statistically <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0052429">more likely to prescribe antidepressants to men</a>, especially those in blue-collar jobs. </p>
<h2>How depression affects the brain</h2>
<p>Depression affects specific parts of the brain. My research has shown that the areas of the brain affected by depression are <a href="https://www.nature.com/articles/s41398-019-0512-8">extremely similar</a> in different people. So much so that we can predict whether someone has depression or not with more than 80% accuracy just by looking at these brain structures on MRI scans. </p>
<p>Other <a href="https://www.frontiersin.org/articles/10.3389/fnagi.2022.912283/full">research</a> using advanced AI models has supported this finding, suggesting brain structure may be a helpful direction for AI-based diagnosis. </p>
<p>Studies using magnetic resonance imaging (MRI) data on <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/jmri.28578?casa_token=7iqdP3r0DgoAAAAA:wyEHaiB5f-WdmIyoUGfpp6azxVuRG4VXwpXKQ6SandIbLmuslH6bnZXq9HhAiY3-famw8VwCjv_iH8k">brain function at rest</a> can also predict depression correctly more than 80% of the time. </p>
<p>However, combining functional and structural information from MRI gives the best accuracy, correctly predicting depression in over <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/jmri.28650?casa_token=R-RQBGPfnqYAAAAA:5Ya81irf7UquSLWfBGYj919hFsYgBxXp2MicrqEMFyw4kfmTO6bxRcd2TdFM1FPQxsd8_-Ffw4DRx8Q">93% of cases</a>. This suggests using multiple brain imaging techniques for AI to detect depression may be the most viable way forward.</p>
<p>MRI-based AI tools are currently only used for research purposes. But as MRI scans become cheaper, faster and more <a href="https://www.science.org/content/article/mri-all-cheap-portable-scanners-aim-revolutionize-medical-imaging">portable</a>, it’s likely this kind of technology will soon be part of your doctor’s <a href="https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-023-04698-z#:%7E:text=Results,performance%20in%20several%20healthcare%20aspects.">toolkit</a>, helping them to improve diagnosis and enhance patient care. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/transcranial-magnetic-stimulation-can-treat-depression-developing-research-suggests-it-could-also-help-autism-adhd-and-ocd-211417">Transcranial magnetic stimulation can treat depression. Developing research suggests it could also help autism, ADHD and OCD</a>
</strong>
</em>
</p>
<hr>
<h2>The diagnostic tools you might have already</h2>
<p>While MRI-based AI applications are promising, a simpler and easier method of detecting depression may be at hand, quite literally. </p>
<p>Wearable devices like smart watches are being investigated for their ability to <a href="https://www.nature.com/articles/s41746-023-00828-5">detect and predict</a> depression. Smart watches are especially helpful because they can collect a wide variety of data including heart rate, step counts, metabolic rate, sleep data and social interaction. </p>
<p>A recent <a href="https://www.nature.com/articles/s41746-023-00828-5#Sec2">review</a> of all studies done so far on using wearables to assess depression found depression was correctly predicted 70–89% of the time. Since wearable devices are commonly used and worn around the clock, this research suggests they could provide unique data that might otherwise be hard to collect. </p>
<p>There are some <a href="https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00194-7/fulltext">drawbacks</a>, however, including the substantial cost of smart devices, which puts them out of reach for many. Other concerns include questions over how accurately smart devices detect biological data in <a href="https://www.nejm.org/doi/full/10.1056/NEJMc2029240">people of colour</a> and the <a href="https://www.nature.com/articles/s41746-021-00408-5">lack of diversity</a> in study populations. </p>
<p>Studies have also turned to social media to detect depression. Using AI, scientists have predicted the presence and severity of depression from the <a href="https://journals.sagepub.com/doi/10.1177/0165551517740835">language of our posts and community memberships</a> on social media platforms. The specific words that were used predicted depression with up to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0933365723002300">90% success</a> rates in both English and Arabic. Depression has also been successfully detected in its early stages from the <a href="https://www.sciencedirect.com/science/article/abs/pii/S2468696422000283">emojis we use</a>.</p>
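As a heavily simplified sketch of language-based screening, one could score a post by how often it uses words from a fixed lexicon. The word list and threshold logic below are invented for illustration; the studies cited above use learned models over large datasets, not hand-picked keywords.

```python
# Illustrative only: score a post by the fraction of its words that
# appear in a (hypothetical, hand-picked) lexicon. Real language-based
# screening learns its features from data rather than using a word list.

LEXICON = {"alone", "tired", "empty", "hopeless", "worthless"}

def lexicon_rate(post: str) -> float:
    """Fraction of a post's words found in the lexicon (0.0 if empty)."""
    words = post.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in LEXICON for w in words) / len(words)
```

A post like "so tired and empty tonight" scores 0.4 (two of five words), while neutral text scores 0.0; a learned model plays the same role as this score, but with features it discovered itself.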
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man's hands tipping out one capsule with glass of water nearby on table" src="https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563811/original/file-20231206-29-s2excn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Doctors are statistically more likely to prescribe antidepressants to men.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/senior-man-feeling-stressed-depressed-takes-2337502457">Shutterstock</a></span>
</figcaption>
</figure>
<h2>Predicting responses to treatment</h2>
<p>Several studies have found antidepressant <a href="https://prcp.psychiatryonline.org/doi/10.1176/appi.prcp.20220015">treatment response</a> <a href="https://www.nature.com/articles/s41746-023-00817-8#Sec6">could be predicted</a> with more than 70% accuracy from electronic health records alone. This could provide doctors with more accurate evidence when prescribing medication-based treatments. </p>
<p>Combining data from people in trials for antidepressants, scientists have <a href="https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(15)00471-X/fulltext">predicted</a> whether taking medications will help specific patients go into remission from depression.</p>
<p>AI shows substantial promise in the diagnosis and management of depression. However, recent findings require validation before they can be relied upon as diagnostic tools. Until then, MRI scans, wearables and social media may still help doctors diagnose and treat depression.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflix-psychiatrist-phil-stutz-says-85-of-early-therapy-gains-are-down-to-lifestyle-changes-is-he-right-195567">Netflix psychiatrist Phil Stutz says 85% of early therapy gains are down to lifestyle changes. Is he right?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/211420/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sarah Hellewell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Research suggests AI could diagnose depression from health records or even social media posts. And it could overcome GP bias when it comes to prescribing medications.Sarah Hellewell, Research Fellow, Faculty of Health Sciences, Curtin University, and The Perron Institute for Neurological and Translational Science, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2193022023-12-08T05:11:49Z2023-12-08T05:11:49ZIsrael’s AI can produce 100 bombing targets a day in Gaza. Is this the future of war?<p>Last week, reports emerged that the Israel Defense Forces (IDF) are using an artificial intelligence (AI) system <a href="https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/">called Habsora</a> (Hebrew for “The Gospel”) to select targets in the war on Hamas in Gaza. The system has reportedly been used to <a href="https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets">find more targets for bombing</a>, to link locations to Hamas operatives, and to estimate likely numbers of civilian deaths in advance.</p>
<p>What does it mean for AI targeting systems like this to be used in conflict? My research into the social, political and ethical implications of military use of remote and autonomous systems shows AI is already altering the character of war. </p>
<p>Militaries use remote and autonomous systems as “force multipliers” to increase the impact of their troops and protect their soldiers’ lives. AI systems can make soldiers more efficient, and are likely to enhance the speed and lethality of warfare – even as humans become less visible on the battlefield, instead gathering intelligence and targeting from afar. </p>
<p>When militaries can kill at will, with little risk to their own soldiers, will the current ethical thinking about war prevail? Or will the increasing use of AI also increase the dehumanisation of adversaries and the disconnect between wars and the societies in whose names they are fought?</p>
<h2>AI in war</h2>
<p>AI is having an impact at all levels of war, from “intelligence, surveillance and reconnaissance” support, like the IDF’s Habsora system, through to “lethal autonomous weapons systems” that can choose and attack targets <a href="https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems">without human intervention</a>.</p>
<p>These systems have the potential to reshape the character of war, making it easier to enter into a conflict. As complex and distributed systems, they may also make it more difficult to signal one’s intentions – or interpret those of an adversary – in the context of an escalating conflict.</p>
<p>To this end, AI can <a href="https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning">contribute to mis- or disinformation</a>, creating and amplifying dangerous misunderstandings in times of war. </p>
<p>AI systems may increase the human tendency to trust suggestions from machines (this is highlighted by the Habsora system, named after the infallible word of God), opening up uncertainty over <a href="https://www.tandfonline.com/doi/abs/10.1080/15027570.2018.1481907">how far to trust</a> autonomous systems. The boundaries of an AI system that interacts with other technologies and with people may not be clear, and there may be <a href="https://www.jstor.org/stable/j.ctv11g97wm">no way to know who or what has “authored” its outputs</a>, no matter how objective and rational they may seem.</p>
<h2>High-speed machine learning</h2>
<p>Perhaps one of the most basic and important changes we are likely to see driven by AI is an increase in the speed of warfare. This may change how we understand <a href="https://www.rand.org/pubs/research_reports/RR2797.html">military deterrence</a>, which assumes humans are the primary actors and sources of intelligence and interaction in war.</p>
<p>Militaries and soldiers frame their decision-making through what is called the “<a href="https://fhs.brage.unit.no/fhs-xmlui/bitstream/handle/11250/2683228/Boyds%20OODA%20Loop%20Necesse%20vol%205%20nr%201.pdf?sequence=1&isAllowed=y">OODA loop</a>” (for observe, orient, decide, act). A faster OODA loop can help you outmanoeuvre your enemy. The goal is to avoid slowing down decisions through excessive deliberation, and instead to match the accelerating tempo of war.</p>
<p>So the use of AI is potentially justified on the basis it can interpret and synthesise huge amounts of data, processing it and delivering outputs at rates that far surpass human cognition. </p>
<p>But where is the space for ethical deliberation in an increasingly fast and data-centric OODA loop cycle happening at a safe distance from battle?</p>
<p>Israel’s targeting software is an example of this acceleration. A former head of the IDF has <a href="https://www.ynetnews.com/magazine/article/ry0uzlhu3">said</a> that human intelligence analysts might produce 50 bombing targets in Gaza each year, but the Habsora system can produce 100 targets a day, along with real-time recommendations for which ones to attack.</p>
<p>How does the system produce these targets? It does so through probabilistic reasoning offered by machine learning algorithms.</p>
<p>Machine learning algorithms learn through data. They learn by seeking patterns in huge piles of data, and their success is contingent on the data’s quality and quantity. They make recommendations based on probabilities. </p>
<p>The probabilities are based on pattern-matching. If a person shares enough characteristics with other people labelled as enemy combatants, they may be labelled a combatant themselves. </p>
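<p>The pattern-matching logic described above can be illustrated with a toy sketch in Python. Everything here, from the feature vectors to the labels and the 0.9 threshold, is an invented assumption for illustration; it reflects no real targeting system.</p>

```python
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def label_by_similarity(person, labelled_examples, threshold=0.9):
    """Copy the label of the most similar labelled example, but only when the
    match clears an arbitrary threshold; otherwise return "unknown"."""
    best_label, best_score = "unknown", 0.0
    for features, label in labelled_examples:
        score = similarity(person, features)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else "unknown"

# Entirely made-up feature vectors for two labelled profiles.
examples = [([1.0, 0.9, 0.1], "combatant"), ([0.1, 0.2, 1.0], "civilian")]

# A new person who merely resembles the first profile inherits its label.
print(label_by_similarity([0.95, 0.85, 0.15], examples))
```

<p>The point of the sketch is that the output depends entirely on which features were chosen, who labelled the training examples, and an arbitrary threshold; none of those choices is objective.</p>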
<h2>The problem of AI-enabled targeting at a distance</h2>
<p>Some claim machine learning enables <a href="https://philpapers.org/rec/ARKTCF">greater precision in targeting</a>, which makes it easier to avoid harming innocent people and using a proportional amount of force. However, the idea of more precise targeting of airstrikes has not been successful in the past, as the high toll of <a href="https://airwars.org/">declared and undeclared civilian casualties</a> from the global war on terror shows. </p>
<p>Moreover, the difference between a combatant and a civilian is <a href="https://philpapers.org/rec/WILSAU">rarely self-evident</a>. Even humans frequently cannot tell who is and is not a combatant.</p>
<p>Technology does not change this fundamental truth. Social categories and concepts are often not objective; they are contested or specific to a time and place. Computer vision combined with algorithms is more effective in predictable environments where concepts are objective, reasonably stable, and internally consistent. </p>
<h2>Will AI make war worse?</h2>
<p>We live in a time of <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8497.2005.00393.x">unjust wars</a> and military occupations, egregious <a href="https://www.defence.gov.au/about/reviews-inquiries/afghanistan-inquiry">violations of the rules of engagement</a>, and an incipient <a href="https://www.nytimes.com/2023/03/25/world/asia/asia-china-military-war.html">arms race</a> in the face of US–China rivalry. In this context, the inclusion of AI in war may add new complexities that exacerbate, rather than prevent, harm. </p>
<p>AI systems make it easier for actors in war to <a href="https://doi.org/10.48550/arXiv.1802.07228">remain anonymous</a>, and can render invisible the source of violence or the decisions which lead to it. In turn, we may see increasing disconnection between militaries, soldiers, and civilians, and the wars being fought in the name of the nation they serve.</p>
<p>And as AI grows more common in war, militaries will develop countermeasures to undermine it, creating a loop of escalating militarisation. </p>
<h2>What now?</h2>
<p>Can we control AI systems to head off a future in which warfare is driven by increasing reliance on technology underpinned by learning algorithms? Controlling AI development in any area, particularly via laws and regulations, has proven difficult.</p>
<p>Many suggest we need better laws to account for systems underpinned by machine learning, but even this is not straightforward. Machine learning algorithms are <a href="https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/">difficult to regulate</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/us-military-plans-to-unleash-thousands-of-autonomous-war-robots-over-next-two-years-212444">US military plans to unleash thousands of autonomous war robots over next two years</a>
</strong>
</em>
</p>
<hr>
<p>AI-enabled weapons may program and update themselves, evading legal requirements for certainty. The engineering maxim “software is never done” implies that the law may never match the speed of technological change.</p>
<p>The quantitative act of estimating likely numbers of civilian deaths in advance, which the Habsora system does, does not tell us much about the qualitative dimensions of targeting. Systems like Habsora in isolation cannot really tell us much about whether a strike would be ethical or legal (that is, whether it is proportionate, discriminate and necessary, among other considerations).</p>
<p>AI should support democratic ideals, not undermine them. Trust in governments, institutions, and militaries <a href="https://www.un.org/development/desa/dspd/2021/07/trust-public-institutions/">is eroding</a> and needs to be restored if we plan to apply AI across a range of military practices. We need to deploy critical ethical and political analysis to interrogate emerging technologies and their effects so any form of military violence is considered to be the last resort.</p>
<p>Until then, machine learning algorithms are best kept separate from targeting practices. Unfortunately, the world’s armies are heading in the opposite direction.</p>
<p class="fine-print"><em><span>Bianca Baggiarini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
AI systems will accelerate the pace of war.
Bianca Baggiarini, Lecturer, Australian National University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/218517
2023-12-05T22:42:16Z
2023-12-05T22:42:16Z
Wikipedia’s volunteer editors are fleeing online abuse. Here’s what that could mean for the internet (and you)
<figure><img src="https://images.theconversation.com/files/562311/original/file-20231129-17-hg57m4.jpg?ixlib=rb-1.1.0&rect=44%2C11%2C7304%2C4120&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>We’re now sadly used to seeing toxic exchanges play out on social media platforms like X (formerly Twitter), Facebook and TikTok. </p>
<p>But Wikipedia is a reference work. How heated can people get over an encyclopedia? </p>
<p>Our <a href="https://academic.oup.com/pnasnexus/article-lookup/doi/10.1093/pnasnexus/pgad385">research</a>, published today, shows the answer is very heated. For example, one Wikipedia editor wrote to another:</p>
<blockquote>
<p>i will find u in real life and slit your throat.</p>
</blockquote>
<p>That’s a problem for many reasons, but chief among them is that if Wikipedia goes down in a ball of toxic fire, it might take the rest of the internet’s information infrastructure with it. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/let-the-community-work-it-out-throwback-to-early-internet-days-could-fix-social-medias-crisis-of-legitimacy-213209">Let the community work it out: Throwback to early internet days could fix social media's crisis of legitimacy</a>
</strong>
</em>
</p>
<hr>
<h2>The internet’s favourite encyclopedia</h2>
<p>In some ways, Wikipedia is both an encyclopedia and a social media platform. </p>
<p>It’s <a href="https://www.semrush.com/website/top/">the fourth most popular website</a> on the internet, behind only such giants as Google, YouTube and Facebook. </p>
<p>Every day, <a href="https://stats.wikimedia.org/#/all-projects">millions of people worldwide</a> use it for quick fact-checks or in-depth research. </p>
<p>And what happens to Wikipedia matters beyond the platform itself because of its central role in online information infrastructure. </p>
<p>Google search relies heavily on Wikipedia and the quality of its search results would <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14883/14733">decrease substantially</a> if Wikipedia disappeared. </p>
<p>But it’s not just an increasingly authoritative source of knowledge. Even though we don’t always lump Wikipedia in with other social media platforms, it shares some common features. </p>
<p>It relies on contributors to create the content that the public will view and it creates spaces for those contributors to interact. Wikipedia relies solely on the work of volunteers: no one is paid for writing or editing content. </p>
<p>Moreover, no one checks the credentials of editors — anyone can make a contribution. This arguably makes Wikipedia the most successful collaborative project in history. </p>
<p>However, the fact that Wikipedia is a collaborative platform also makes it vulnerable. </p>
<p>A <a href="https://meta.wikimedia.org/wiki/Research:Harassment_survey_2015">2015 survey</a> found 38% of surveyed Wikipedia users had experienced harassment on the platform.</p>
<p>What if the collaborative environment deteriorates, and its volunteer editors abandon the project? </p>
<p>What effect do toxic comments have on Wikipedia’s editors, content and community?</p>
<h2>Abusive comments lead to disengaging</h2>
<p>To answer this question, we started with Wikipedia’s “user talk pages”. A user talk page is a place where other editors can interact with the user. They can post messages, discuss personal topics, or extend discussions from an article’s talk page. </p>
<p>Every editor has a personal user talk page, and the majority of toxic comments made on the platform are on these pages. </p>
<p>We collected information on 57 million comments made on the user talk pages of 8.5 million editors across the six most active language editions of Wikipedia (English, German, French, Italian, Spanish and Russian) over a period of 20 years. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/students-are-told-not-to-use-wikipedia-for-research-but-its-a-trustworthy-source-168834">Students are told not to use Wikipedia for research. But it's a trustworthy source</a>
</strong>
</em>
</p>
<hr>
<p>We then used a <a href="https://perspectiveapi.com/">state-of-the-art machine learning algorithm</a> to identify toxic comments. The algorithm looked for attributes a human might consider toxic, like insults, threats, or identity attacks.</p>
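<p>Our study used a machine learning model for this step. As a rough, purely illustrative stand-in (a fixed keyword lookup, not the actual classifier), attribute-based flagging might be sketched like this; the marker phrases are invented:</p>

```python
# Toy stand-in for a toxicity classifier. The real study used a machine
# learning model (Perspective API); this keyword lookup only illustrates
# the idea of flagging attributes such as insults or threats.
TOXIC_MARKERS = {
    "insult": ["idiot", "stupid"],
    "threat": ["slit your throat", "find u in real life"],
}

def toxicity_attributes(comment):
    """Return the set of toxic attributes a comment triggers."""
    text = comment.lower()
    return {attr for attr, phrases in TOXIC_MARKERS.items()
            if any(phrase in text for phrase in phrases)}

def is_toxic(comment):
    return bool(toxicity_attributes(comment))

print(is_toxic("i will find u in real life"))          # triggers a threat marker
print(is_toxic("Thanks for fixing the references!"))   # triggers nothing
```

<p>A real classifier generalises far beyond any fixed list of phrases, which is precisely why machine learning is needed to process 57 million comments.</p>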
<p>We compared the activity of editors before and after they received a toxic comment, as well as with a control group of similar editors who received a non-toxic rather than toxic comment. </p>
<p>We found receiving a single toxic comment could reduce an editor’s activity by 1.2 active days in the short term. Considering that 80,307 users on English Wikipedia alone have received at least one toxic comment, the cumulative impact could amount to 284 lost human-years. </p>
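<p>In miniature, the before-and-after comparison works like the sketch below. All figures and field names are invented for illustration and are not drawn from the study’s data.</p>

```python
def mean(values):
    return sum(values) / len(values)

def activity_change(editors):
    """Average change in active days from before to after the comment."""
    return mean([e["after"] - e["before"] for e in editors])

# Invented figures: editors who received a toxic comment, compared with a
# matched control group who received a non-toxic one.
treated = [{"before": 5, "after": 3}, {"before": 4, "after": 3}]
control = [{"before": 5, "after": 5}, {"before": 4, "after": 4}]

effect = activity_change(treated) - activity_change(control)
print(effect)  # negative: activity drops relative to the control group
```

<p>Comparing against a control group is what separates the effect of the toxic comment itself from ordinary fluctuations in editing activity.</p>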
<p>Moreover, some users don’t just contribute less. They stop contributing altogether. </p>
<p>We found that the probability of leaving Wikipedia’s community of contributors increases after receiving a toxic comment, with new users being particularly vulnerable. New editors who receive toxic comments are nearly twice as likely to leave Wikipedia as would be expected otherwise. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="The wikipedia logo on a yellow office wall" src="https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/562555/original/file-20231129-21-2li5ve.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Wikipedia is just as vulnerable to toxic commentary as other popular websites.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Wikipedia_Office_Globe.jpg">Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Wide-ranging consequences</h2>
<p>This matters more than you might think to the millions who use Wikipedia. </p>
<p>First, toxicity likely leads to poorer-quality content on the site. Having a diverse editor cohort is a crucial factor for maintaining content quality. The vast majority of Wikipedian editors <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0065782">are men</a>, which is reflected in the content on the platform. </p>
<p>There are <a href="https://journals.sagepub.com/doi/full/10.1177/14614448211023772">fewer articles about women</a>, and those that do exist are shorter than articles about men and more likely to centre on <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14628">romantic relationships and family-related issues</a>. They are also more often linked to articles about the opposite gender. Women are often described as wives of famous people rather than for their own merits, for example.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/30-years-of-the-web-down-under-how-australians-made-the-early-internet-their-own-212542">30 years of the web down under: how Australians made the early internet their own</a>
</strong>
</em>
</p>
<hr>
<p>While multiple barriers confront women editors on Wikipedia, toxicity is likely one of the key factors that contributes to the gender imbalance. Although men and women are <a href="https://www.datasociety.net/pubs/oh/Online_Harassment_2016.pdf">equally likely</a> to face online harassment and abuse, women experience more severe violations and are more likely to be affected by such incidents, including self-censoring. </p>
<p>This may affect other groups as well: our research showed that toxic comments often include not just gendered language but also ethnic slurs and other biases.</p>
<p>Finally, a significant rise in toxicity, especially targeted attacks on new users, could jeopardise Wikipedia’s survival. </p>
<p>Following a period of <a href="https://icwsm.org/papers/2--Almeida-Mozafari-Cho.pdf">exponential growth</a> in its editor base during the early 2000s, the number has been <a href="https://wikipedia20.mitpress.mit.edu/pub/lifecycles/release/2">largely stable</a> since 2016, with the exception of a brief activity spike during the COVID pandemic. Currently about the same number of editors join the project as leave it, but the balance could easily be tipped if more people left because of online abuse.</p>
<p>That would damage not only Wikipedia, but also the rest of the online information infrastructure it helps to support. </p>
<p>There’s no easy fix to this, but our research shows promoting healthy communication practices is critical to protecting crucial online information ecosystems.</p>
<p class="fine-print"><em><span>Ivan Smirnov does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
It’s the fourth most popular website in the world, but our new study shows toxic commentary can still thrive on Wikipedia. There’s a lot at stake if too many editors are driven away.
Ivan Smirnov, Research Fellow, University of Technology Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/218222
2023-11-21T23:22:34Z
2023-11-21T23:22:34Z
Forget dystopian scenarios – AI is pervasive today, and the risks are often hidden
<figure><img src="https://images.theconversation.com/files/560896/original/file-20231121-21-j31yi.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C3979&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The AI most likely to cause you harm is not some malevolent superintelligence, but the loan algorithm at your bank.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/ConsumerBorrowing/0244f5e79b0d4039a142e681afe376a4/photo">AP Photo/Mark Humphrey</a></span></figcaption></figure><p>The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors <a href="https://www.nytimes.com/2023/11/17/technology/openai-sam-altman-ousted.html">firing high-profile CEO Sam Altman</a> on Nov. 17, 2023, and <a href="https://apnews.com/article/altman-openai-chatgpt-31187f7f6eca8ff9d0eef7585aac6ace">rehiring him just four days later</a>, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as <a href="https://www.technologyreview.com/2023/11/16/1083498/google-deepmind-what-is-artificial-general-intelligence-agi/">human-level intelligence across a range of tasks</a>. </p>
<p>The OpenAI board stated that <a href="https://openai.com/blog/openai-announces-leadership-transition">Altman’s termination was for lack of candor</a>, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI’s remarkable growth – products such as ChatGPT and Dall-E have <a href="https://techcrunch.com/2023/11/06/openais-chatgpt-now-has-100-million-weekly-active-users/">acquired hundreds of millions of users worldwide</a> – has <a href="https://www.reuters.com/technology/sam-altmans-firing-openai-reflects-schism-over-future-ai-development-2023-11-20/">hindered the company’s ability</a> to focus on <a href="https://time.com/6337437/sam-altman-openai-fired-why-microsoft-musk/">catastrophic risks</a> posed by AGI. </p>
<p>OpenAI’s goal of developing AGI has become entwined with the idea of <a href="https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/">AI acquiring superintelligent capabilities</a> and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.</p>
<p>As a <a href="https://scholar.google.com/citations?user=JpFHYKcAAAAJ&hl">researcher of information systems and responsible AI</a>, I study how these everyday algorithms work – and how they can harm people.</p>
<h2>AI is pervasive</h2>
<p>AI plays a visible part in many people’s daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be vaguely aware of – for example, shaping your social media and online shopping sessions, guiding your video-watching choices and <a href="https://dl.acm.org/doi/abs/10.1145/2702123.2702548">matching you with a driver</a> in a ride-sharing service.</p>
<p>AI also affects your life in ways that might completely escape your notice. If you’re applying for a job, <a href="https://hbr.org/2016/04/how-companies-are-using-simulations-competitions-and-analytics-to-hire">many employers use AI in the hiring process</a>. Your bosses might be using it to identify employees <a href="https://www.wsj.com/articles/the-algorithm-that-tells-the-boss-who-might-quit-1426287935">who are likely to quit</a>. If you’re applying for a loan, odds are your bank is using AI to decide whether to grant it. If you’re being treated for a medical condition, your health care providers might use it to <a href="http://dx.doi.org/10.1136/medethics-2020-106820">assess your medical images</a>. And if you know someone caught up in the criminal justice system, AI could well play a role in <a href="https://epic.org/issues/ai/ai-in-the-criminal-justice-system/">determining the course of their life</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/6nGM37ThEsU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">AI has become nearly ubiquitous in the hiring process.</span></figcaption>
</figure>
<h2>Algorithmic harms</h2>
<p>Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use <a href="https://plato.stanford.edu/entries/logic-inductive/">inductive logic</a>, which starts with a set of premises, to generalize patterns from training data. A machine learning-based <a href="https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against">resume screening tool was found to be biased against women</a> because the training data reflected past practices when most resumes were submitted by men. </p>
<p>The use of predictive methods in areas ranging from health care to child welfare could exhibit <a href="https://doi.org/10.1073/pnas.2301990120">biases such as cohort bias</a> that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender – for example, in consumer lending – <a href="https://ssrn.com/abstract=3204674">proxy discrimination can still occur</a>. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers <a href="https://doi.org/10.1016/j.jfineco.2021.05.047">pay significantly higher interest rates</a> on government-sponsored enterprise securitized and Federal Housing Authority insured loans than white borrowers. </p>
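<p>Proxy discrimination is easiest to see in a toy pricing rule: the model never reads the protected attribute, yet a correlated feature carries it in anyway. The data, features and numbers below are synthetic and deliberately exaggerated; they are not real lending figures.</p>

```python
# Synthetic applicants: race is recorded here only so we can audit the
# outcome afterwards; the pricing rule below never looks at it.
applicants = [
    {"race": "A", "neighbourhood": "north", "income": 50},
    {"race": "A", "neighbourhood": "north", "income": 52},
    {"race": "B", "neighbourhood": "south", "income": 50},
    {"race": "B", "neighbourhood": "south", "income": 52},
]

def quoted_rate(applicant):
    """Uses only 'permitted' features, but neighbourhood correlates
    perfectly with race in this toy data, so it acts as a proxy."""
    rate = 5.0
    if applicant["neighbourhood"] == "south":
        rate += 1.5              # surcharge keyed to the proxy feature
    return rate - 0.01 * applicant["income"]

rates_a = [quoted_rate(a) for a in applicants if a["race"] == "A"]
rates_b = [quoted_rate(a) for a in applicants if a["race"] == "B"]
gap = sum(rates_b) / len(rates_b) - sum(rates_a) / len(rates_a)
print(gap)  # group B is quoted higher rates despite identical incomes
```

<p>Auditing for this kind of disparity requires checking outcomes by group, because inspecting the model’s inputs alone would show nothing illegal.</p>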
<p>Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm’s designers intended. In a well-known example, a neural network learned to <a href="https://montrealethics.ai/target-specification-bias-counterfactual-prediction-and-algorithmic-fairness-in-healthcare/">associate asthma with a lower risk of death from pneumonia</a>. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the <a href="https://doi.org/10.1145/3600211.3604678">outcome from such a neural network</a> is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized. </p>
<p>Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are <a href="https://doi.org/10.1126/sciadv.aao5580">likely to commit crimes again</a>. But the data used to train predictive algorithms is actually about who is likely to get re-arrested. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/IzvgEs1wPFQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Racial bias in algorithms is an ongoing problem.</span></figcaption>
</figure>
<h2>AI safety in the here and now</h2>
<p>The Biden administration’s recent <a href="https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694">executive order</a> and <a href="https://theconversation.com/ftc-probe-of-openai-consumer-protection-is-the-opening-salvo-of-us-ai-regulation-209821">enforcement efforts by federal agencies</a> such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms. </p>
<p>And though <a href="https://www.nvidia.com/en-us/glossary/data-science/large-language-models/">large language models</a>, such as GPT-3 that powers ChatGPT, and <a href="https://doi.org/10.2196/52865">multimodal large language models</a>, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It’s important to consider the biases that result from widespread use of large language models.</p>
<p>For example, these models could exhibit biases resulting from <a href="http://proceedings.mlr.press/v139/liang21a.html">negative stereotyping involving gender, race or religion</a>, as well as biases in representation of <a href="http://dx.doi.org/10.18653/v1/2023.eacl-main.126">minorities and disabled people</a>. As these models demonstrate the ability to outperform <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4389233">humans on tests such as the bar exam</a>, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to <a href="https://doi.org/10.1287/isre.2023.ed.v34.n2">standards of transparency, accuracy and source crediting</a>, and <a href="https://doi.org/10.1038/d41586-023-00288-7">that stakeholders have the authority</a> to enforce such standards.</p>
<p>Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.</p>
<p class="fine-print"><em><span>Anjana Susarla receives funding from the Omura-Saxena Professorship in Responsible AI.</span></em></p>
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Anjana Susarla, Professor of Information Systems, Michigan State University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/216754
2023-11-19T07:54:18Z
2023-11-19T07:54:18Z
South African university students use AI to help them understand – not to avoid work
<figure><img src="https://images.theconversation.com/files/557695/original/file-20231106-271094-ybb1d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Students are not adopting digital and AI-powered tools uncritically.</span> <span class="attribution"><span class="source">tzido/iStock</span></span></figcaption></figure><p>When <a href="https://openai.com/blog/chatgpt">ChatGPT</a> was released in November 2022, it sparked many conversations and moral panics. These centre on the impact of generative artificial intelligence (AI) on the <a href="https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/">information environment</a>. People worry that AI chatbots can negatively affect the integrity of creative and academic work, especially since they can produce human-like texts and images.</p>
<p>ChatGPT is a generative AI model using machine learning. It creates human-like responses, having been trained to recognise patterns in data. While it appears the model is engaging in natural conversation, it references a vast amount of data and extracts features and patterns to generate coherent replies.</p>
<p>Higher education is one sector in which the rise of AI like ChatGPT has <a href="https://unesdoc.unesco.org/ark:/48223/pf0000386693/PDF/386693eng.pdf.multi.%20%22">sparked concerns</a>. Some of these relate to ethics and integrity in teaching, learning and knowledge production. </p>
<p>We’re a group of academics in the field of media and communication, teaching in South African universities. We wanted to understand how university students were using generative AI and AI-powered tools in their academic practices. We administered an online survey to undergraduate students at five South African universities: the University of Cape Town, Cape Peninsula University of Technology, Stellenbosch University, Rhodes University, and the University of the Witwatersrand. </p>
<p><a href="http://dx.doi.org/10.2139/ssrn.4595655">The results</a> suggest that the moral panics around the use of generative AI are unwarranted. Students are not hyper-focused on ChatGPT. We found that students often use generative AI tools for engaged learning and that they have a critical and nuanced understanding of these tools. </p>
<p>What could be of greater concern from a teaching and learning perspective is that, second to using AI-powered tools for clarifying concepts, students are using them to generate ideas for assignments or essays or when they feel stuck on a specific topic. </p>
<h2>Unpacking the data</h2>
<p>The survey was completed by 1,471 students. Most spoke English as their home language, followed by isiXhosa and isiZulu. The majority were first-year students. Most respondents were registered in Humanities, followed by Science, Education and Commerce. While the survey is thus skewed towards first-year Humanities students, it provides useful indicative findings as educators explore new terrain. </p>
<p>We asked students whether they had used individual AI tools, listing some of the most popular tools across several categories. Our survey did not explore lecturers’ attitudes or policies towards AI tools. This will be probed in the next phase of our study, which will comprise focus groups with students and interviews with lecturers. Our study was not on ChatGPT specifically, though we did ask students about their use of this specific tool. We explored broad uses of AI-powered technologies to get a sense of how students use these tools, which tools they use, and where ChatGPT fits into these practices. </p>
<p>These were the key findings:</p>
<ul>
<li><p>41% of respondents indicated that they primarily used a laptop for their academic work, followed by a smartphone (29.8%). Only 10.5% used a desktop computer and 6.6% used a tablet.</p></li>
<li><p>Students tended to use a range of other AI-powered tools over ChatGPT, including translation and referencing tools. With reference to the use of online writing assistants such as <a href="https://quillbot.com/">Quillbot</a>, 46.5% of respondents indicated that they used such tools to improve their writing style for an assignment. 80.5% indicated that they had used <a href="https://app.grammarly.com/">Grammarly</a> or similar tools to help them write in appropriate English. </p></li>
<li><p>Fewer than half of survey respondents (37.3%) said that they had used ChatGPT to answer an essay question.</p></li>
<li><p>Students acknowledged that AI-powered tools could lead to plagiarism or affect their learning. However, they also stated that they did not use these tools in problematic ways. </p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-is-the-push-higher-education-needs-to-rethink-assessment-200314">ChatGPT is the push higher education needs to rethink assessment</a>
</strong>
</em>
</p>
<hr>
<ul>
<li><p>Respondents were overwhelmingly positive about the potential of digital and AI tools to make it easier for them to progress through university. They indicated that these tools could help to: clarify academic concepts; formulate ideas; structure essays; improve academic writing; save time; check spelling and grammar; clarify assignment instructions; find information or academic sources; summarise academic texts; guide students for whom English is not a native language to improve their academic writing; study for a test; paraphrase better; avoid plagiarism; and reference better. </p></li>
<li><p>Most students who viewed these tools as beneficial to the learning process used tools such as ChatGPT to clarify concepts related to their studies that they could not fully grasp or that they felt were not properly explained by lecturers.</p></li>
</ul>
<h2>Engaged learning</h2>
<p>We were particularly interested to find that students often used generative AI tools for <a href="https://www.semanticscholar.org/paper/Engaged-Learning%3A-Making-Learning-an-Authentic-Hung-Tan/2c2dd8cf1d0a5a3c94c189cc98f511292a2bfc2b">engaged learning</a>. This is an educational approach in which students are accountable for their own learning. They actively create thinking and learning skills and strategies and formulate new ideas and understanding through conversations and collaborative work. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/please-do-not-assume-the-worst-of-us-students-know-ai-is-here-to-stay-and-want-unis-to-teach-them-how-to-use-it-203426">'Please do not assume the worst of us': students know AI is here to stay and want unis to teach them how to use it</a>
</strong>
</em>
</p>
<hr>
<p>Through their use of AI tools, students can tailor content to address their specific strengths and weaknesses, making for a more engaged learning experience. AI tools can also act as a kind of personalised online “tutor” with which students have “conversations” to help them understand difficult concepts.</p>
<p>Concerns about how AI tools potentially undermine academic assessment and integrity are valid. However, those working in higher education must note the importance of factoring in students’ perspectives to work towards new pathways of assessment and learning.</p>
<p><em>The <a href="https://ssrn.com/abstract=4595655">full version</a> of this article was co-authored by Marenet Jordaan, Admire Mare, Job Mwaura, Sisanda Nkoala, Alette Schoon and Alexia Smit.</em></p>
<p class="fine-print"><em><span>Chikezie E. Uzuegbunam receives funding from the Mellon Foundation and Rhodes University Research Council. </span></em></p><p class="fine-print"><em><span>Tanja Bosch does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Students often use generative AI tools for engaged learning. They have a critical and nuanced understanding of these tools.Tanja Bosch, Professor in Media Studies and Production, University of Cape TownChikezie E. Uzuegbunam, Lecturer & MA Programme Coordinator, Rhodes UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2169202023-11-03T15:48:02Z2023-11-03T15:48:02ZNow and Then: enabled by AI – created by profound connections between the four Beatles<blockquote>
<p>In 2023, to still be working on Beatles music … to release a new song the public haven’t heard, I think it’s an exciting thing. </p>
</blockquote>
<p>Not surprisingly, <a href="https://www.bbc.co.uk/news/entertainment-arts-67207699">Paul McCartney was positive</a> about the appearance this week of what has been trailed as the “last” Beatles song, Now and Then.</p>
<p>Much has been made of <a href="https://www.billboard.com/music/rock/paul-mccartney-ai-final-beatles-song-1235352398/">AI being part of the production</a>. Machine learning was used to <a href="https://www.npr.org/sections/world-cafe/2023/11/02/1208848690/the-beatles-last-song-now-and-then">recognise John Lennon’s voice</a>, and then isolate it from other sounds – a piano, a television in the background, electrical hum – to make it usable in a new recording. It also comes amid a recent slew of Beatles-related activity – a <a href="https://www.radiotimes.com/tv/documentaries/beatles-celebration-night-bbc-newsupdate/">new podcast series</a>, Peter Jackson’s epic 2021 documentary <a href="https://www.theguardian.com/music/2021/sep/26/beatles-final-days-get-back-let-it-be-john-harris-peter-jackson">Get Back</a>, new versions of the famed <a href="https://www.rollingstone.com/music/music-features/beatles-red-and-blue-sheffield-1234706610/">Red and Blue</a> compilation albums, and a Paul McCartney tour during which he is playing some of the Fab Four’s back catalogue.</p>
<p>The commercial juggernaut seems unstoppable, so it’s perhaps easy to be cynical about a “new” song from a band that broke up in 1970, two of whose members are dead. Certainly, Now and Then does raise questions about how technologically mediated releases relate to collective artistic output, and what it means to be a band.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/APJAQoSCwuA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Collective creativity in bands</h2>
<p>In many ways, though, the AI label is a red herring, and this new song – which actually has its roots in a John Lennon demo tape from 1977 – demonstrates a continuing pattern. The Beatles and their narrative provided a seminal example of how bands work, and seemed to be ploughing the furrow for others. </p>
<p>From their original formation as schoolboys (Ringo joined in 1962 when they started recording), to their enormous financial success and cultural impact, the Beatles laid down templates that others have followed. <a href="https://www.liverpoolecho.co.uk/news/nostalgia/july-6-1957-day-beatles-9594637">Lennon and McCartney’s first meeting</a> at a church fete in 1957 is now the stuff of legend.</p>
<p>Their <a href="https://theconversation.com/the-beatles-revolutionised-music-by-putting-the-record-centre-stage-56103">innovations in the studio</a>, assisted by producer <a href="https://www.britannica.com/biography/George-Martin">George Martin</a>, helped to make recordings – especially albums – a central feature of the popular music experience. They emerged into professional practice together, splitting as they formed new relationships and moved onto the next phases of their life while still relatively young men.</p>
<p>Bands are simultaneously social groupings, creative units and economic entities. The economic “brand” can obviously run on for many years after the others have stopped. There is also a long history of posthumous releases, including <a href="https://www.loudersound.com/features/jimi-hendrix-6-essential-posthumous-albums">Jimi Hendrix</a>, <a href="https://www.bbc.co.uk/music/reviews/6fmq/">Elliott Smith</a> and <a href="https://theconversation.com/prince-why-five-years-after-his-death-the-purple-one-still-reigns-159166">Prince</a>, even Otis Redding’s defining hit <a href="https://www.rollingstone.com/music/music-features/inside-otis-reddings-final-masterpiece-sittin-on-the-dock-of-the-bay-122170/">(Sittin’ On) The Dock of the Bay</a>. Demo recordings, unheard live performances and radio broadcasts are all established parts of artists’ catalogues.</p>
<p>This becomes complicated, though, when the act in question is a collective with deceased members whose presence on the recording is technologically facilitated. A key example is the Beatles’ 1995 <a href="https://ultimateclassicrock.com/beatles-free-as-a-bird/">Anthology</a> project, which saw the surviving members revisit John Lennon demos from a cassette given to McCartney by Yoko Ono, and add new parts to finish the songs.</p>
<p>This wasn’t entirely unique. Queen’s <a href="https://www.udiscovermusic.com/behind-the-albums/queen-made-in-heaven/">Made In Heaven</a>, in the same year, saw the band finish songs that Freddie Mercury worked on in the studio before he died. But it did involve resurrecting fragments of home recordings to clean them up for the commercial market.</p>
<p>The technology wasn’t sufficient at the time to properly isolate Lennon’s voice on Now and Then, so the track was shelved until Peter Jackson used machine learning to remove noise from source recordings for Get Back. By the time that technology became available, George Harrison had died, so McCartney and Starr returned to the song, incorporating Harrison’s guitar solo from the aborted 1990s attempt.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/AW55J2zE3N4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Come together</h2>
<p>We can, then, consider the process behind this latest song in evolutionary rather than revolutionary terms. The possibilities of multi-track recording since the 1950s mean it’s long been the case that musicians have worked separately on the same song. As <a href="https://www.theneweuropean.co.uk/brexit-news-the-beatles-white-album-60s-70s-john-lennon-wider-cultural-35006/">George Harrison said of The White Album</a>:</p>
<blockquote>
<p>There was a lot more individual stuff … people were accepting that it was individual. I remember having three studios operating at the same time. Paul was doing some overdubs in one, John was in another and I was recording some horns or something in a third.</p>
</blockquote>
<p>Even when the Beatles were together, many canonical songs were the work of only one or two of them. McCartney wrote <a href="https://www.youtube.com/watch?v=wXTJBr9tt8Q">Yesterday</a> and <a href="https://www.youtube.com/watch?v=Man4Xw8Xypo">Blackbird</a> alone, and is the only Beatle who plays on them. <a href="https://www.youtube.com/watch?v=v-1OgNqBkVE">The Ballad of John and Yoko</a> didn’t feature Harrison or Starr.</p>
<p>And the former band members played on each other’s “solo” records too. There are more Beatles on Harrison’s <a href="https://www.youtube.com/watch?v=eNL40ql4CYk">All Those Years Ago</a> or Lennon’s <a href="https://www.youtube.com/watch?v=7-SSa-D1i-M">Instant Karma</a> than on some of the band’s tracks. They all played separately on Starr’s 1973 album Ringo.</p>
<p>So Now and Then continues longstanding practices, going back to their heyday. Its status as the final Beatles song, though, reveals technological limitations. AI can create convincing facsimiles, but can’t replicate the facts of who actually played or sang the various parts, which is a central plank of what constitutes a band.</p>
<p>Audiences <a href="https://eprints.ncl.ac.uk/file_store/production/215862/EA14B274-3E9F-47EC-94FF-5B7AF6167671.pdf">ascribe authenticity</a> to music in many ways, and core among these for bands is the line-up – some acts <a href="https://theconversation.com/ac-dcs-back-in-black-at-40-establishing-rock-bands-as-brands-143473">have effectively replaced key members</a> within the brand, others have had <a href="https://www.cbsnews.com/sanfrancisco/news/7-times-when-replacing-the-lead-singer-of-a-band-did-not-work/">less success</a>. It’s often a source of debate, at least, with “<a href="https://livemusicexchange.org/blog/stoned-again-adam-behr/">classic</a>” line-ups being those that earn the audience stamp of authenticity.</p>
<p>So what of the song itself? It won’t supplant the likes of Hey Jude or Help in The Beatles’ musical pantheon. That bar, though, is high, and the plangent piano-led ballad has a familiar yet distinctive arrangement, steeped in nostalgia but affecting on its own terms nevertheless. Lennon’s voice is clearer than on previous reconstructions and the harmonies sound like, well … The Beatles.</p>
<p>In that sense, what’s at the heart of this project is the presence – even spectrally – of the actual four people who made up the creative and social underpinning for the brand. The “last” Beatles song sees them demonstrating the importance, even as a coda to their recording career, of the interpersonal connections that set things in motion in the first place.</p>
<hr>
<p class="fine-print"><em><span>Adam Behr has received funding from the Arts and Humanities Research Council and the British Academy.</span></em></p>This new last Beatles song, enabled in part by AI, demonstrates the importance of the profound and lasting connections between the four musicians.Adam Behr, Senior Lecturer in Popular and Contemporary Music, Newcastle UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2167302023-11-01T10:10:34Z2023-11-01T10:10:34ZWe built a ‘brain’ from tiny silver wires. It learns in real time, more efficiently than computer-based AI<figure><img src="https://images.theconversation.com/files/556971/original/file-20231031-15-3i8f9x.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C2556%2C1908&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://doi.org/10.1038/s41467-023-42470-5">Zhu et al. / Nature Communications</a></span></figcaption></figure><p>The world is infatuated with artificial intelligence (AI), and for good reason. AI systems can process vast quantities of data in a seemingly superhuman way.</p>
<p>However, current AI systems rely on computers running complex algorithms based on <a href="https://arxiv.org/abs/2212.11279">artificial neural networks</a>. These use <a href="https://www.numenta.com/blog/2022/05/24/ai-is-harming-our-planet/">huge amounts of energy</a>, and use even more energy if you are trying to work with data that changes in real time.</p>
<p>We are working on a completely new approach to “machine intelligence”. Instead of using artificial neural network software, we have developed a <em>physical</em> neural network in hardware that operates much more efficiently.</p>
<p>Our neural networks, made from silver nanowires, can learn on the fly to recognise handwritten numbers and memorise strings of digits. The results of this research, conducted with colleagues from the University of Sydney and the University of California, Los Angeles, are published in <a href="https://doi.org/10.1038/s41467-023-42470-5">a new paper</a> in Nature Communications.</p>
<h2>A random network of tiny wires</h2>
<p>Using nanotechnology, we made networks of silver nanowires about one thousandth the width of a human hair. These nanowires naturally form a random network, much like the pile of sticks in a game of pick-up sticks. </p>
<p>The nanowires’ network structure looks a lot like the network of neurons in our brains. Our research is part of a field called <a href="https://www.nature.com/articles/s41928-021-00646-1">neuromorphic computing</a>, which aims to emulate the brain-like functionality of neurons and synapses in hardware. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A microscope photo showing a messy web of thin grey lines against a black background." src="https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/556993/original/file-20231101-27-46gvu1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Each nanowire is around one thousandth the width of a human hair, and together they form a random network that behaves much like the web of neurons in our brains.</span>
<span class="attribution"><a class="source" href="https://doi.org/10.1038/s41467-023-42470-5">Zhu et al. / Nature Communications</a></span>
</figcaption>
</figure>
<p>Our nanowire networks display brain-like behaviours in response to electrical signals. External electrical signals cause changes in how electricity is transmitted at the points where nanowires intersect, which is similar to how biological <a href="https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/action-potentials-and-synapses">synapses</a> work. </p>
<p>There can be tens of thousands of synapse-like intersections in a typical nanowire network, which means the network can efficiently process and transmit information carried by electrical signals.</p>
<h2>Learning and adapting in real time</h2>
<p>In our study, we show that because nanowire networks can respond to signals that change in time, they can be used for <a href="https://medium.com/value-stream-design/online-machine-learning-515556ff72c5">online machine learning</a>. </p>
<p>In conventional machine learning, data is fed into the system and processed in <a href="https://towardsdatascience.com/batch-mini-batch-stochastic-gradient-descent-7a62ecba642a">batches</a>. In the online learning approach, we can introduce data to the system as a continuous stream in time. </p>
<p>With each new piece of data, the system learns and adapts in real time. It demonstrates “on the fly” learning, which we humans are good at but current AI systems are not. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/networks-of-silver-nanowires-seem-to-learn-and-remember-much-like-our-brains-204115">Networks of silver nanowires seem to learn and remember, much like our brains</a>
</strong>
</em>
</p>
<hr>
<p>The online learning approach enabled by our nanowire network is more efficient than conventional batch-based learning in AI applications. </p>
<p>In batch learning, a significant amount of memory is needed to process large datasets, and the system often needs to go through the same data multiple times to learn. This not only demands high computational resources but also consumes more energy overall. </p>
<p>Our online approach requires less memory as data is processed continuously. Moreover, our network learns from each data sample only once, significantly reducing energy use and making the process highly efficient.</p>
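The difference between batch and online updating can be sketched in conventional software. This is an illustration of the learning regime only, not of our physical network (which does not run code like this); the data stream and learning rate below are invented for the example.

```python
import random

def online_learn(stream, lr=0.1):
    """Online learning: one update per sample, each sample seen exactly once,
    and no batch of data ever held in memory."""
    w0, w1, b = 0.0, 0.0, 0.0
    for (x0, x1), y in stream:
        pred = 1.0 if w0 * x0 + w1 * x1 + b > 0 else 0.0
        err = y - pred                      # perceptron-style error signal
        w0 += lr * err * x0
        w1 += lr * err * x1
        b += lr * err
    return w0, w1, b

# Toy data stream: the label is 1 when x0 + x1 > 1 (linearly separable).
random.seed(0)
stream = []
for _ in range(500):
    x0, x1 = random.random(), random.random()
    stream.append(((x0, x1), 1.0 if x0 + x1 > 1 else 0.0))

w0, w1, b = online_learn(stream)
correct = sum(
    (1.0 if w0 * x0 + w1 * x1 + b > 0 else 0.0) == y
    for (x0, x1), y in stream
)
print(f"accuracy after a single pass: {correct / len(stream):.2f}")
```

The key property mirrored here is that the learner touches each sample once as it arrives, in contrast to batch training, which stores the dataset and iterates over it repeatedly.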
<h2>Recognising and remembering numbers</h2>
<p>We tested the nanowire network with a benchmark image recognition task using the <a href="https://paperswithcode.com/dataset/mnist">MNIST dataset</a> of handwritten digits. </p>
<p>The greyscale pixel values in the images were converted to electrical signals and fed into the network. After each digit sample, the network learned and refined its ability to recognise the patterns, displaying real-time learning.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A grid of handwritten digits" src="https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=352&fit=crop&dpr=1 600w, https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=352&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=352&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=443&fit=crop&dpr=1 754w, https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=443&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/557006/original/file-20231101-25-pghp65.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=443&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The nanowire network learned to recognise handwritten numbers, a common benchmark for machine learning systems.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/MNIST_database#/media/File:MnistExamplesModified.png">NIST / Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Using the same learning method, we also tested the nanowire network with a memory task involving patterns of digits, much like the process of remembering a phone number. The network demonstrated an ability to remember previous digits in the pattern. </p>
<p>Overall, these tasks demonstrate the network’s potential for emulating brain-like learning and memory. Our work has so far only scratched the surface of what neuromorphic nanowire networks can do.</p>
<p class="fine-print"><em><span>Zdenka Kuncic owns shares in Emergentia, Inc., and acknowledges support from the Australian-American Fulbright Commission.</span></em></p><p class="fine-print"><em><span>Ruomin Zhu receives the PREA scholarship from the University of Sydney. </span></em></p>A tangle of silver nanowires may pave the way to low-energy real-time machine learning.Zdenka Kuncic, Professor of Physics, University of SydneyRuomin Zhu, PhD student, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2147212023-10-16T19:05:07Z2023-10-16T19:05:07ZAI is closer than ever to passing the Turing test for ‘intelligence’. What happens when it does?<figure><img src="https://images.theconversation.com/files/553931/original/file-20231016-17-wzq8rn.jpg?ixlib=rb-1.1.0&rect=77%2C113%2C3916%2C3880&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Pexels/Google Deepmind</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>In 1950, British computer scientist Alan Turing proposed an experimental method for answering the question: can machines think? He suggested if a human couldn’t tell whether they were speaking to an artificially intelligent (AI) machine or another human after five minutes of questioning, this would demonstrate AI has human-like intelligence.</p>
<p>Although AI systems remained far from passing Turing’s test during his lifetime, he speculated that</p>
<blockquote>
<p>[…] in about fifty years’ time it will be possible to programme computers […] to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning.</p>
</blockquote>
<p>Today, more than 70 years after Turing’s proposal, no AI has managed to successfully pass the test by fulfilling the specific conditions he outlined. Nonetheless, as <a href="https://www.nature.com/articles/d41586-023-02361-7">some headlines</a> <a href="https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/">reflect</a>, a few systems have come quite close.</p>
<p><a href="https://browse.arxiv.org/pdf/2305.20010.pdf">One recent experiment</a> tested three large language models, including GPT-4 (the AI technology behind ChatGPT). The participants spent two minutes chatting with either another person or an AI system. The AI was prompted to make small spelling mistakes – and quit if the tester became too aggressive. </p>
<p>With this prompting, the AI did a good job of fooling the testers. When paired with an AI bot, testers could only correctly guess whether they were talking to an AI system 60% of the time. </p>
<p>Given the rapid progress achieved in the design of natural language processing systems, we may see AI pass Turing’s original test within the next few years. </p>
<p>But is imitating humans really an effective test for intelligence? And if not, what are some alternative benchmarks we might use to measure AI’s capabilities?</p>
<h2>Limitations of the Turing test</h2>
<p>While a system passing the Turing test gives us <em>some</em> evidence it is intelligent, this test is not a decisive test of intelligence. One problem is that it can produce “false negatives”. </p>
<p>Today’s large language models are often designed to immediately declare they are not human. For example, when you ask ChatGPT a question, it often prefaces its answer with the phrase “as an AI language model”. Even if AI systems have the underlying ability to pass the Turing test, this kind of programming would override that ability.</p>
<p>The test also risks certain kinds of “false positives”. As philosopher Ned Block <a href="https://www.jstor.org/stable/2184371">pointed out</a> in a 1981 article, a system could conceivably pass the Turing test simply by being hard-coded with a human-like response to any possible input.</p>
<p>Beyond that, the Turing test focuses on human cognition in particular. If AI cognition differs from human cognition, an expert interrogator will be able to find some task where AIs and humans differ in performance.</p>
<p>Regarding this problem, Turing wrote:</p>
<blockquote>
<p>This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.</p>
</blockquote>
<p>In other words, while passing the Turing test is good evidence a system is intelligent, failing it is not good evidence a system is <em>not</em> intelligent.</p>
<p>Moreover, the test is not a good measure of whether AIs are conscious, whether they can feel pain and pleasure, or whether they have moral significance. According to many cognitive scientists, consciousness involves a particular cluster of mental abilities, including having a working memory, higher-order thoughts, and the ability to perceive one’s environment and model how one’s body moves around it.</p>
<p>The Turing test does not answer the question of whether or not AI systems <a href="https://arxiv.org/abs/2308.08708">have these abilities</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911">AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?</a>
</strong>
</em>
</p>
<hr>
<h2>AI’s growing capabilities</h2>
<p>The Turing test is based on a certain logic. That is: humans are intelligent, so anything that can effectively imitate humans is likely to be intelligent.</p>
<p>But this idea doesn’t tell us anything about the nature of intelligence. A different way to measure AI’s intelligence involves thinking more critically about what intelligence is. </p>
<p>There is currently no single test that can authoritatively measure artificial or human intelligence. </p>
<p>At the broadest level, we can think of intelligence as the <a href="https://arxiv.org/pdf/2303.12712.pdf">ability</a> to achieve a range of goals in different environments. More intelligent systems are those which can achieve a wider range of goals in a wider range of environments. </p>
<p>As such, the best way to keep track of advances in the design of general-purpose AI systems is to assess their performance across a variety of tasks. Machine learning researchers have developed a range of benchmarks that do this.</p>
<p>For example, GPT-4 was <a href="https://openai.com/research/gpt-4">able to correctly answer</a> 86% of questions in the Massive Multitask Language Understanding (MMLU) benchmark, which measures performance on multiple-choice tests across a range of college-level academic subjects. </p>
<p>It also scored favourably in <a href="https://arxiv.org/pdf/2308.03688.pdf">AgentBench</a>, a tool that can measure a large language model’s ability to behave as an agent by, for example, browsing the web, buying products online and competing in games.</p>
<h2>Is the Turing test still relevant?</h2>
<p>The Turing test is a measure of imitation – of AI’s ability to simulate human behaviour. Large language models are expert imitators, and this is now being reflected in their potential to pass the Turing test. But intelligence is not the same as imitation.</p>
<p>There are as many types of intelligence as there are goals to achieve. The best way to understand AI’s intelligence is to monitor its progress in developing a range of important capabilities.</p>
<p>At the same time, it’s important we don’t keep “changing the goalposts” when it comes to the question of whether AI is intelligent. Since AI’s capabilities are rapidly improving, critics of the idea of AI intelligence are constantly finding new tasks AI systems may struggle to complete – only to find they have jumped over <a href="https://www.newyorker.com/culture/annals-of-inquiry/the-mechanical-muse">yet another hurdle</a>. </p>
<p>In this setting, the relevant question isn’t whether AI systems are intelligent, but more precisely, what <em>kinds</em> of intelligence they may have.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Turing test, first proposed in 1950 by Alan Turing, was framed as a test that could supposedly tell us whether an AI system could ‘think’ like a human.Simon Goldstein, Associate Professor, Dianoia Institute of Philosophy, Australian Catholic University, Australian Catholic UniversityCameron Domenico Kirk-Giannini, Assistant Professor of Philosophy, Rutgers UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2123032023-09-27T13:55:54Z2023-09-27T13:55:54ZLagos building collapses: we used machine learning to show where and why they happen<p>Building collapses have become a major <a href="https://estateintel.com/lagos-state-has-seen-an-alarming-rate-of-1-building-collapse-every-two-months-in-the-last-6-months">menace</a> in Lagos, Nigeria. Lagos is the business hub of the country and has its largest seaport and airport. With an estimated population of <a href="https://www.statista.com/statistics/1308467/population-of-lagos-nigeria/">15.4 million</a>, it is the largest city in sub-Saharan Africa and the second largest in Africa after Cairo.</p>
<p>The city has two distinct geographical areas: <a href="https://lagosstate.gov.ng/about-lagos/#:%7E:text=It%20consists%20of%20five%20Local,with%20the%20City%20of%20Lagos">Lagos Island</a> and Lagos Mainland, connected by <a href="https://www.ice.org.uk/what-is-civil-engineering/what-do-civil-engineers-do/third-mainland-bridge-lagos">three bridges</a>. Lagos Island is the <a href="https://www.youtube.com/watch?v=UIcPNQydUG0">historical nucleus</a> of the city. This area is renowned for its eclectic mix of architectural styles, a blend of modern skyscrapers, remnants of colonial-era structures and bustling traditional markets. It serves as the <a href="https://lagosstate.gov.ng/about-lagos/#:%7E:text=It%20consists%20of%20five%20Local,with%20the%20City%20of%20Lagos">centre of the city’s financial, entertainment and corporate activities</a>. Ikoyi, Victoria Island and Lekki are popularly regarded as an extension of Lagos Island.</p>
<p>Lagos Mainland has residential areas, markets and industrial zones. </p>
<p>There have been numerous <a href="https://theconversation.com/why-buildings-keep-collapsing-in-lagos-and-what-can-be-done-about-it-113928">building collapses</a> in both areas. </p>
<p>Using machine learning techniques, we built a <a href="https://www.tandfonline.com/doi/full/10.1080/15623599.2023.2222966">model</a> that ranked the factors affecting building construction collapses in order of relevance. We also modelled the number of casualties by location. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/knowing-what-leads-to-building-collapses-can-help-make-african-cities-safer-118423">Knowing what leads to building collapses can help make African cities safer</a>
</strong>
</em>
</p>
<hr>
<p>The study classified causes of building collapses into human factors, natural disasters and unspecified causes. Human factors included sub-standard material, structural defects, onsite changes of plan, bad supervision, demolition processes, non-adherence to building standards and regulations, lack of geotechnical information, poor maintenance, construction defects and overload. </p>
<p>Our analysis produced two main findings.</p>
<p>First, location was the most relevant factor contributing to building collapses in Lagos. We found that more buildings collapsed on the island than on the mainland. </p>
<p>Second, building collapses on the mainland had a higher number of casualties than those on the island. </p>
<p>Based on our findings, we recommended proper onsite geotechnical inspection before the start of construction in both locations.</p>
<h2>Building the model</h2>
<p>Our study showcased the applicability of supervised machine learning models for a range of purposes. Supervised machine learning models are algorithms that learn from labelled data, where the input (features) and corresponding desired output (labels or targets) are provided. These models are trained to recognise patterns and relationships in the data, allowing them to make predictions or classifications on new, unseen data. </p>
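<p>To make the idea concrete, a supervised classifier can be sketched in a few lines of pure Python. This toy example is purely illustrative, not the model from our study: each training row pairs invented input features (here, a year and a storey count) with a known label, and an unseen point is classified by its nearest labelled neighbour.</p>

```python
from math import dist

# Toy labelled dataset: each row is (features, label).
# Features (year, storeys) and labels are hypothetical values for illustration.
training = [
    ((2011, 4), "island"),
    ((2006, 2), "mainland"),
    ((2014, 6), "mainland"),
    ((2000, 3), "island"),
]

def predict(features):
    """Classify unseen data by its nearest labelled neighbour (1-NN)."""
    _, label = min(training, key=lambda row: dist(row[0], features))
    return label

print(predict((2012, 5)))  # the closest training point decides the label
```

<p>Real models, including the one in our study, learn more nuanced patterns from many labelled examples, but the principle is the same: labelled inputs in, predictions on unseen data out.</p>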
<p>Our study provided a comprehensive analysis of building collapse statistics in Lagos from 2000 to 2021. The buildings ranged from bungalows to multi-storey buildings and skyscrapers.</p>
<p>On average, <a href="https://estateintel.com/lagos-state-has-seen-an-alarming-rate-of-1-building-collapse-every-two-months-in-the-last-6-months">four buildings collapse</a> each year, resulting in approximately 31 casualties annually. </p>
<p>The highest number of collapses occurred in 2011, with 10 buildings involved, followed by 2000 and 2006, with nine each. The peak casualty count, 140, occurred in 2014, concentrated in the Ikotun-Egbe area of the Lagos mainland.</p>
<h2>The differences</h2>
<p>Our model suggested that the higher number of collapses on the island was due to the soil there. The island soil’s geotechnical properties give it poorer capacity to bear building loads. </p>
<p>We identified three factors for the higher number of deaths from building collapses on the mainland:</p>
<ul>
<li><p>Many landowners in the mainland area ignored soil tests because they assumed it was safe to build there, given the area’s reputation for having soil that could bear heavier building loads.</p></li>
<li><p>The height of the building. </p></li>
<li><p>The quality of materials used. </p></li>
</ul>
<h2>To prevent future collapses and casualties</h2>
<p>Our study emphasised the importance of understanding the causes of building collapses in Lagos, and the potential of machine learning algorithms for prediction.</p>
<p>We made a number of recommendations.</p>
<p>First, it is important to carry out basic soil investigations, using qualified professionals and building engineers, to ascertain the geotechnical properties and bearing capacity of the soil.</p>
<p>This information would clearly identify the type of building that the soil can support.</p>
<p>Second, assigning the right job to the right professional is paramount. For instance, the job of a civil engineer should not be assigned to an architect. </p>
<p>Third, eradication of substandard materials is key to a durable structure. </p>
<p>Fourth, many property owners add extra floors and extensions to maximise profit. Yet the higher the building, the deeper the foundation. Geotechnical properties of the soil will determine the choice and quality of the foundation. In addition, location should determine the choice of a building foundation.</p>
<p>Last, there should be policies in place to enhance proper onsite geotechnical inspection. </p>
<p>We also recommend the use of machine learning for predicting building collapses.</p>
<p class="fine-print"><em><span>Olushina Olawale Awe receives funding from FAPESP Brazil. He is affiliated with Statistics Learning Laboratory, UFBA as a research team leader. </span></em></p><p class="fine-print"><em><span>Emmanuel Oluwaseyi Atofarati does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>An AI model shows that building collapses in Lagos are location specific, and soil testing can help to check them.Olushina Olawale Awe, Professor of Statistics, Universidade Federal da Bahia (UFBA)Emmanuel Oluwaseyi Atofarati, PhD Candidate, University of PretoriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2111622023-09-21T12:44:28Z2023-09-21T12:44:28ZNASA’s Mars rovers could inspire a more ethical future for AI<figure><img src="https://images.theconversation.com/files/547617/original/file-20230911-8058-meu5mp.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C2105%2C1409&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Rather than using AI to replace workers, companies can build teams that ethically integrate the technology.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/robot-finger-touching-to-human-finger-royalty-free-image/1182764551?phrase=person+and+robot&adppopup=true">Yuichiro Chino/Moment via Getty Images</a></span></figcaption></figure><p>Since ChatGPT’s release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. 
Tech pundits have issued warnings of killer robots bent on <a href="https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai">human extinction</a>, while the World Economic Forum predicted that machines <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2020">will take away jobs</a>. </p>
<p>The tech sector is <a href="https://www.computerworld.com/article/3685936/tech-layoffs-in-2023-a-timeline.html">slashing its workforce</a> even as it <a href="https://www.forbes.com/advisor/business/software/ai-in-business/">invests in AI-enhanced productivity tools</a>. Writers and actors in Hollywood <a href="https://theconversation.com/actors-are-demanding-that-hollywood-catch-up-with-technological-changes-in-a-sequel-to-a-1960-strike-209829">are on strike</a> to protect <a href="https://www.theguardian.com/technology/2023/jul/22/sag-aftra-wga-strike-artificial-intelligence">their jobs and their likenesses</a>. And scholars continue to show how these systems <a href="https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/">heighten existing biases</a> or create meaningless jobs – amid myriad other problems.</p>
<p>There is a better way to bring artificial intelligence into workplaces. I know, because I’ve seen it, <a href="https://janet.vertesi.com">as a sociologist</a> who works with NASA’s robotic spacecraft teams. </p>
<p>The scientists and engineers I study are busy exploring <a href="https://mars.jpl.nasa.gov">the surface of Mars</a> with the help of AI-equipped rovers. But their job is no science fiction fantasy. It’s an example of the power of weaving machine and human intelligence together, in service of a common goal.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=1000&fit=clip"><img alt="An artist's rendition of the Perseverance rover, made of metal with six small wheels, a camera and a robotic arm." src="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers act as an important part of NASA’s team, even while operating millions of miles away from their scientist teammates.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/MarsLanding/c835b14b3e6645d7a0cd46558745752b/photo?Query=mars%20rover&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=530&currentItemNo=11&vs=true">NASA/JPL-Caltech via AP</a></span>
</figcaption>
</figure>
<p>Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.</p>
<h2>The replacement myth in AI</h2>
<p>Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be <a href="https://ntrs.nasa.gov/citations/19940022856">replaced by automated machines</a>. </p>
<p>Amid the existential threat is the promise of business boons <a href="https://hbr.org/sponsored/2023/04/how-automation-drives-business-growth-and-efficiency">like greater efficiency</a>, <a href="https://www.forbes.com/sites/waldleventhal/2017/08/03/how-automation-could-save-your-business-4-million-annually/?sh=691f5edc3807">improved profit margins</a> and <a href="https://www.aspeninstitute.org/wp-content/uploads/files/content/upload/Intro_and_Section_I.pdf">more leisure time</a>.</p>
<p>Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by <a href="https://doi.org/10.1257/pandp.20201063">cutting out low-status workers</a> and <a href="https://www.jstor.org/stable/2118494">increasing the salary cost</a> for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo19085612.html">work more</a> for their employers, not less.</p>
<p>Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, <a href="https://doi.org/10.1109/TRO.2021.3087314">self-driving cars must be programmed</a> to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A zoomed in shot of a white car with a bumper sticker reading 'self-driving car'" src="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=560&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=560&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=560&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Self-driving cars, while operating without human intervention, still require training from human engineers and data collected by humans.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/GoogleCars/b10293841f2a474eaadb0b408277e360/photo?Query=self%20driving%20cars&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=483&currentItemNo=1&vs=true">AP Photo/Tony Avelar</a></span>
</figcaption>
</figure>
<p>However, mixed autonomy is often seen as a step <a href="https://doi.org/10.6092/issn.1971-8853/11657">along the way to replacement</a>. And it can lead to systems where humans merely <a href="https://www.prospectmagazine.co.uk/ideas/technology/62810/ai-artificial-intelligence-trains-itself-zuckerman">feed, curate or teach AI tools</a>. This saddles humans with “<a href="https://ghostwork.info/">ghost work</a>” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.</p>
<p>Replacement raises red flags for AI ethics. Work like <a href="https://www.bbc.com/news/av/world-africa-66514287">tagging content to train AI</a> or <a href="https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1012&context=commpub">scrubbing Facebook posts</a> typically features <a href="https://hbr.org/2022/11/content-moderation-is-terrible-by-design">traumatic tasks</a> and <a href="https://dl.acm.org/doi/10.1145/3173574.3174023">a poorly paid workforce</a> <a href="https://dl.acm.org/doi/10.1145/3555561">spread across</a> <a href="https://giswatch.org/node/6202">the Global South</a>. And legions of autonomous vehicle designers are obsessed with “<a href="https://www.moralmachine.net/">the trolley problem</a>” – determining when or whether it is ethical to run over pedestrians. </p>
<p>But my research <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">with robotic spacecraft teams at NASA</a> shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.</p>
<h2>Extending rather than replacing</h2>
<p><a href="https://doi.org/10.1007/978-3-030-62056-1_21">Strong human-robot teams</a> work best when they <a href="https://digitalreality.ieee.org/publications/what-is-augmented-intelligence">extend and augment</a> human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, <a href="https://doi.org/10.2514/6.2004-6434">working toward a shared goal</a>.</p>
<p>Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. <a href="https://www.popsci.com/technology/navy-robotic-minesweeper-cleared-for-deployment/">Minesweeping</a>, <a href="https://theconversation.com/an-expert-on-search-and-rescue-robots-explains-the-technologies-used-in-disasters-like-the-florida-condo-collapse-163564">search-and-rescue</a>, <a href="https://ntrs.nasa.gov/citations/20170010160">spacewalks</a> and <a href="https://news.stanford.edu/2022/07/20/oceanonek-connects-humans-sight-touch-deep-sea/">deep-sea</a> robots are all real-world examples. </p>
<p>Teamwork also means leveraging the combined strengths of <a href="https://doi.org/10.1145/3022198.3022659">both robotic and human senses or intelligences</a>. After all, there are many capabilities that robots have that humans do not – and vice versa.</p>
<p>For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers <a href="https://mars.nasa.gov/mars2020/spacecraft/rover/cameras/">with camera filters</a> to “see” wavelengths of light that humans can’t see in the infrared, returning pictures in brilliant <a href="http://pancam.sese.asu.edu/projects_5.html">false colors</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A false-color photo from the point of view of a rover standing at the cliff overlooking a brown, sandy desert-like area that looks blue in the distance." src="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=148&fit=crop&dpr=1 600w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=148&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=148&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=186&fit=crop&dpr=1 754w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=186&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=186&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers capture images in near infrared to show what Martian soil is made of.</span>
<span class="attribution"><a class="source" href="https://mars.nasa.gov/resources/6934/high-martian-viewpoint-for-11-year-old-rover-false-color-landscape/">NASA/JPL-Caltech/Cornell Univ./Arizona State Univ</a></span>
</figcaption>
</figure>
<p>Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">uncover new truths about Mars</a>.</p>
<h2>Respectful data</h2>
<p>Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work <a href="https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473">without their consent</a>, commercial datasets are <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">rife with bias</a>, and <a href="https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html">ChatGPT “hallucinates”</a> answers to questions.</p>
<p>The real-world consequences of this data use in AI range from <a href="https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart">lawsuits</a> to <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">racial profiling</a>.</p>
<p>Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information to <a href="https://www.nasa.gov/feature/jpl/nasa-s-self-driving-perseverance-mars-rover-takes-the-wheel">generate driveable pathways</a> or <a href="https://mars.nasa.gov/resources/26782/perseverances-supercam-uses-aegis-for-the-first-time/">suggest cool new images</a>.</p>
<p>By focusing on the world around them instead of our social worlds, these robotic systems avoid the <a href="https://doi.org/10.1007/s43681-022-00196-y">questions around surveillance</a>, <a href="https://doi.org/10.1073/pnas.1700035114">bias</a> <a href="https://haveibeentrained.com/">and exploitation</a> that plague today’s AI.</p>
<h2>The ethics of care</h2>
<p>Robots can <a href="http://shapingscience.net/">unite the groups</a> that work with them by eliciting human emotions when integrated seamlessly. For example, seasoned soldiers <a href="https://www.washington.edu/news/2013/09/17/emotional-attachment-to-robots-could-affect-outcome-on-battlefield/">mourn broken drones on the battlefield</a>, and families give names and personalities <a href="https://faculty.cc.gatech.edu/%7Ebeki/c35.pdf">to their Roombas</a>. </p>
<p>I saw NASA engineers <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">break down in anxious tears</a> when the rovers Spirit and Opportunity were threatened by Martian dust storms.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A hand petting a light blue, circular Roomba vacuum." src="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some people feel a connection to their robot vacuums, similar to the connection NASA engineers feel to Mars rovers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/hand-petting-a-robot-vacuum-cleaner-royalty-free-image/1134449246?phrase=roomba&adppopup=true">nikolay100/iStock / Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<p>Unlike <a href="https://www.britannica.com/topic/anthropomorphism">anthropomorphism</a> – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility. </p>
<p>When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.</p>
<h2>A better AI is possible</h2>
<p>In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them. </p>
<p>Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms <a href="https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/">to fuel creativity</a> and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.</p>
<p>Of course, rejecting replacement does not <a href="https://www.cambridge.org/us/universitypress/subjects/law/humanitarian-law/autonomous-weapons-systems-law-ethics-policy?format=PB">eliminate all ethical concerns</a> with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.</p>
<p>The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch “Star Wars” if the ‘droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.</p>
<p class="fine-print"><em><span>Janet Vertesi has consulted for NASA teams. She receives funding from the National Science Foundation.</span></em></p>AI poses a variety of ethical conundrums, but the NASA teams working on Mars rovers exemplify an ethic of care and human-robot teamwork that could act as a blueprint for AI’s future.Janet Vertesi, Associate Professor of Sociology, Princeton UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2131152023-09-13T12:29:28Z2023-09-13T12:29:28ZWhy humans can’t trust AI: You don’t know how it works, what it’s going to do or whether it’ll serve your interests<figure><img src="https://images.theconversation.com/files/547875/original/file-20230912-25-g536dq.jpg?ixlib=rb-1.1.0&rect=26%2C0%2C2936%2C1942&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Do you trust AI systems, like this driverless taxi, to behave the way you expect them to?</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/WaymoRobotaxiExpansion/c339a6a2c683436fbb4910ffd7927e55/photo">AP Photo/Terry Chea</a></span></figcaption></figure><p>There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, <a href="https://doi.org/10.1080/23322039.2021.2023262">determine your creditworthiness</a> and write <a href="https://www.npr.org/2022/12/10/1142045405/opinion-machine-made-poetry-is-here">poetry</a> and <a href="https://www.wired.com/story/ai-latest-trick-writing-computer-code">computer code</a>. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily. </p>
<p>But AI systems have a significant limitation: Many of their inner workings are <a href="https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained">impenetrable, making them fundamentally unexplainable</a> and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge. </p>
<p>If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?</p>
<h2>Why AI is unpredictable</h2>
<p><a href="https://plato.stanford.edu/entries/trust/">Trust</a> is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram with three columns of dots, two on the left, four in the center and one on the right, with arrows connecting the dots from left to right" src="https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=821&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=821&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=821&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1032&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1032&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547368/original/file-20230910-116122-4yelon.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1032&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In neural networks, the strength of the connections between ‘neurons’ changes as data passes from the input layer through hidden layers to the output layer, enabling the network to ‘learn’ patterns.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Neural_network_example.svg">Wiso via Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Many AI systems are built on <a href="https://www.mathworks.com/discovery/deep-learning.html">deep learning</a> <a href="https://theconversation.com/what-is-a-neural-network-a-computer-scientist-explains-151897">neural networks</a>, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it <a href="https://towardsdatascience.com/how-do-we-train-neural-networks-edd985562b73">“learns” how to classify the data</a> by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be. </p>
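<p>The “learning by adjusting parameters” idea can be sketched with a single artificial neuron, a deliberately minimal, hypothetical example rather than a real deep network. The neuron’s two parameters, a weight and a bias, are nudged after every mistake until it classifies a toy dataset correctly:</p>

```python
# Minimal sketch: one "neuron" with two parameters (weight, bias) learns
# to separate points by nudging its parameters toward lower error.
# The data values below are invented for illustration.
data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]  # (input, target label)

w, b = 0.0, 0.0          # parameters, adjusted during training
rate = 0.1               # learning rate: the size of each adjustment

for _ in range(200):     # repeated passes over the training data
    for x, target in data:
        output = 1.0 if w * x + b > 0 else 0.0
        error = target - output
        w += rate * error * x   # strengthen or weaken the connection
        b += rate * error

# The trained neuron now classifies inputs it has never seen before.
print([1.0 if w * x + b > 0 else 0.0 for x in (0.5, 3.5)])  # prints [0.0, 1.0]
```

<p>A deep network repeats this adjustment across billions or trillions of such parameters at once, which is why its final configuration resists human inspection.</p>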
<p>Many of the most powerful AI systems contain <a href="https://the-decoder.com/gpt-4-has-a-trillion-parameters/">trillions of parameters</a>. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the <a href="https://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability">AI explainability problem</a> – the impenetrable <a href="https://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888">black box</a> of AI decision-making.</p>
<p>Consider a variation of the <a href="https://plato.stanford.edu/entries/doing-allowing/#TrolProb">“Trolley Problem”</a>. Imagine that you are a passenger in a self-driving vehicle controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization – shaped by ethical norms, the perceptions of others and expected behavior – supports trust. </p>
<p>In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.</p>
<h2>AI behavior and human expectations</h2>
<p>Trust relies not only on predictability, but also on <a href="https://plato.stanford.edu/entries/ethics-virtue/">normative or ethical</a> motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a <a href="https://doi.org/10.1007/s43681-022-00217-w">dynamic process</a>, shaped by ethical standards and others’ perceptions. </p>
<p>Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s <a href="https://www.nytimes.com/2021/11/19/technology/can-a-machine-learn-morality.html">proving challenging</a>.</p>
<p>The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the <a href="https://intelligence.org/stanford-talk/">AI alignment problem</a>, and it’s another source of uncertainty that erects barriers to trust. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WvmeTaFc_Qw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">AI expert Stuart Russell explains the AI alignment problem.</span></figcaption>
</figure>
<h2>Critical systems and trusting AI</h2>
<p>One way to reduce uncertainty and boost trust is to keep people involved in the decisions AI systems make. This is the <a href="https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF">approach taken by the U.S. Department of Defense</a>, which requires that for all AI decision-making, a human must be either in the loop or <a href="https://www.japcc.org/essays/human-on-the-loop/">on the loop</a>. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it.</p>
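The contrast between the two oversight patterns can be sketched in code. The function names and structure below are illustrative only (they are not from the DoD policy document): the key difference is which party's choice gates the action.

```python
def human_in_the_loop(ai_recommendation, human_approves):
    """AI only recommends; nothing happens unless a human initiates it."""
    return ai_recommendation if human_approves(ai_recommendation) else None

def human_on_the_loop(ai_action, human_interrupts):
    """AI acts on its own; a monitoring human may interrupt or alter it."""
    return None if human_interrupts(ai_action) else ai_action

# In the loop: a passive human means no action is ever taken.
print(human_in_the_loop("flag target", lambda a: False))   # None
# On the loop: the same passive human means the action proceeds.
print(human_on_the_loop("flag target", lambda a: False))   # "flag target"
```

The second pattern is the one strained by speed: if the AI acts faster than the monitor can react, "on the loop" collapses into no oversight at all.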
<p>While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.</p>
<p>Avoiding that threshold is especially important because AI is increasingly being integrated into <a href="https://www.hstoday.us/featured/artificial-intelligence-critical-systems-and-the-control-problem/">critical systems</a>, which include things such as electric grids, the internet and <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/perspective-why-strong-artificial-intelligence-weapons-should-be-considered-wmd/">military systems</a>. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.</p>
<h2>Can people ever trust AI?</h2>
<p>AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it. </p>
<p>If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.</p><img src="https://counter.theconversation.com/content/213115/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Mark Bailey is affiliated with the Office of the Director of National Intelligence as a federal employee at National Intelligence University. He is also affiliated with the Department of Defense as an Army Reserve Officer. The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the Office of the Director of National Intelligence, the U.S. Intelligence Community, or the U.S. Government.</span></em></p>People can trust each other because they understand how the human mind works, can predict people’s behavior, and assume that most people have a moral sense. None of these things are true of AI.Mark Bailey, Faculty Member and Chair, Cyber Intelligence and Data Science, National Intelligence UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2128602023-09-11T20:09:04Z2023-09-11T20:09:04ZWhy ChatGPT isn’t conscious – but future AI systems might be<figure><img src="https://images.theconversation.com/files/547419/original/file-20230911-27-sdkyzm.jpg?ixlib=rb-1.1.0&rect=0%2C59%2C8000%2C4431&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-digital-abstract-human-face-on-2138818011">Shutterstock</a></span></figcaption></figure><p>In June 2022, Google engineer Blake Lemoine made headlines by claiming the company’s LaMDA chatbot had achieved sentience. The software had the conversational ability of a precocious seven-year-old, <a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/">Lemoine said</a>, and we should assume it possessed a similar awareness of the world. </p>
<p>LaMDA, later released to the public as <a href="https://blog.google/technology/ai/bard-google-ai-search-updates/">Bard</a>, is powered by a “large language model” (LLM) of the kind that also forms the engine of OpenAI’s ChatGPT bot. Other big tech companies are rushing to deploy similar technology. </p>
<p>Hundreds of millions of people have now had the chance to play with LLMs, but few seem to believe they are conscious. Instead, in linguist and data scientist <a href="https://dl.acm.org/doi/pdf/10.1145/3442188.3445922">Emily Bender’s poetic phrase</a>, they are “stochastic parrots”, which chatter convincingly without understanding. But what about the next generation of artificial intelligence (AI) systems, and the one after that? </p>
<p>Our team of philosophers, neuroscientists and computer scientists looked to current scientific theories of how human consciousness works to draw up a <a href="https://arxiv.org/abs/2308.08708">list of basic computational properties</a> that any hypothetically conscious system would likely need to possess. In our view, no current system comes anywhere near the bar for consciousness – but at the same time, there’s no obvious reason future systems won’t become truly aware.</p>
<h2>Finding indicators</h2>
<p>Since computing pioneer Alan Turing proposed his “<a href="https://theconversation.com/turing-test-why-it-still-matters-123468">Imitation Game</a>” in 1950, the ability to successfully impersonate a human in conversation has often been taken as a reliable marker of consciousness. This is usually because the task has seemed so difficult it must require consciousness. </p>
<p>However, as with chess computer Deep Blue’s 1997 <a href="https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">defeat of grandmaster Garry Kasparov</a>, the conversational fluency of LLMs may just move the goalposts. Is there a principled way to approach the question of AI consciousness that does not rely on our intuitions about what is difficult or special about human cognition? </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-google-software-engineer-believes-an-ai-has-become-sentient-if-hes-right-how-would-we-know-185024">A Google software engineer believes an AI has become sentient. If he’s right, how would we know?</a>
</strong>
</em>
</p>
<hr>
<p>Our recent <a href="https://arxiv.org/abs/2308.08708">white paper</a> aims to do just that. We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems. </p>
<p>We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness. </p>
<h2>The computational processes behind consciousness</h2>
<p>What sort of indicators were we looking for? We avoided overt behavioural criteria – such as being able to hold conversations with people – because these tend to be both human-centric and easy to fake. </p>
<p>Instead, we looked at theories of the computational processes that support consciousness in the human brain. These can tell us about the sort of information-processing needed to support subjective experience. </p>
<p>“Global workspace theories”, for example, postulate that consciousness arises from the presence of a capacity-limited bottleneck which collates information from all parts of the brain and selects information to make globally available. “Recurrent processing theories” emphasise the role of feedback from later processes to earlier ones. </p>
<p>Each theory in turn suggests more specific indicators. Our final list contains 14 indicators, each focusing on an aspect of how systems <em>work</em> rather than how they <em>behave</em>. </p>
<p><iframe id="uvK17" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/uvK17/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>No reason to think current systems are conscious</h2>
<p>How do current technologies stack up? Our analysis suggests there is no reason to think current AI systems are conscious. </p>
<p>Some do meet a few of the indicators. Systems using the transformer architecture, a kind of machine-learning model behind <a href="https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/">ChatGPT and similar tools</a>, meet three of the “global workspace” indicators, but lack the crucial ability for global rebroadcast. They also fail to satisfy most of the other indicators. </p>
<p>So, despite ChatGPT’s impressive conversational abilities, there is probably nobody home inside. Other architectures similarly meet at best a handful of criteria. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know</a>
</strong>
</em>
</p>
<hr>
<p>Most current architectures meet at most a few of the indicators. However, for most of the indicators, there is at least one current architecture that meets it.</p>
<p>This suggests there are no obvious, in-principle technical barriers to building AI systems that satisfy most or all of the indicators. </p>
<p>It is probably a matter of <em>when</em> rather than <em>if</em> some such system is built. Of course, plenty of questions will still remain when that happens. </p>
<h2>Beyond human consciousness</h2>
<p>The scientific theories we canvass (and the authors of the paper!) don’t always agree with one another. We used a list of indicators rather than strict criteria to acknowledge that fact. This can be a powerful methodology in the face of scientific uncertainty. </p>
<p>We were inspired by similar debates about animal consciousness. Most of us think at least some nonhuman animals are conscious, despite the fact they cannot converse with us about what they’re feeling. </p>
<p>A 2021 <a href="https://www.lse.ac.uk/News/News-Assets/PDFs/2021/Sentience-in-Cephalopod-Molluscs-and-Decapod-Crustaceans-Final-Report-November-2021.pdf">report</a> from the London School of Economics arguing that cephalopods such as octopuses likely feel pain was instrumental <a href="https://www.abc.net.au/news/2021-12-16/the-uk-has-recognised-octopuses-crabs-and-lobsters-as-sentient-b/100698106">in changing UK animal ethics policy</a>. A focus on structural features has the surprising consequence that even some simple animals, like insects, <a href="https://theconversation.com/what-it-is-like-to-be-a-bee-insects-can-teach-us-about-the-origins-of-consciousness-57792">might possess a minimal form of consciousness</a>. </p>
<p>Our report does not make recommendations for what to do with conscious AI. This question will become more pressing as AI systems inevitably become more powerful and widely deployed. </p>
<p>Our indicators will not be the last word – but we hope they will become a first step in tackling this tricky question in a scientifically grounded way.</p><img src="https://counter.theconversation.com/content/212860/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Colin Klein receives funding from The Templeton World Charity Foundation (TWCF-2020-20539)</span></em></p>The science of human consciousness offers new ways of gauging machine minds – and suggests there’s no obvious reason computers can’t develop awareness.Colin Klein, Professor, School of Philosophy, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2093922023-09-08T02:44:02Z2023-09-08T02:44:02ZMachine learning can level the playing field against match fixing – helping regulators spot cheating<p>On the eve of the Rugby World Cup kicking off, there have already been whispers of <a href="https://www.independent.ie/sport/rugby/rugby-world-cup/rugby-world-cup-touchlines-spying-stories-persist-as-ireland-hand-big-opportunity-to-joe-mccarthy/a833044452.html">teams spying</a> on each other. Inevitable gamesmanship, perhaps, but there’s no doubt cheating in sport is a problem authorities struggle to combat.</p>
<p>Our <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4493430">new machine learning model</a> could be a game changer when it comes to detecting questionable behaviour and unusual outcomes – especially the practice of <a href="https://sportnz.org.nz/resources/match-fixing-and-gambling-in-sport/">match fixing</a>. </p>
<p>Currently, the act of altering match outcomes for personal or team gain is largely picked up through abnormalities in sports betting markets. When bookmakers notice unusual odds or changes in the betting line, they alert regulators. </p>
<p>But this approach is limited and often fails to identify all match fixing, particularly in less popular sports or leagues. Here is where machine learning can help. </p>
<p>Essentially a subset of artificial intelligence (AI), machine learning acts as a digital probe: mining sports data, revealing hidden patterns, and flagging unusual events. Machines can delve into team performance and unexpected fluctuations, exploring all facets of sports events.</p>
<h2>Using AI to spot unusual activity</h2>
<p>As part of our research, we introduced the concept of “anomalous match identification”, which involved identifying irregular outcomes in games, no matter what the underlying causes might be.</p>
<p>There could be various factors at play, from strategic losses for future advantage – such as the <a href="https://en.as.com/nba/what-is-tanking-in-the-nba-and-why-do-teams-tank-n/">practice of “tanking”</a> in the US National Basketball Association (NBA) – to marketing tactics to boost ticket sales, or just a day of poor performance.</p>
<p>Our research model allows us to flag unusual game results and turn them over to regulators for deeper investigation. By leveraging machine learning, we can spot abnormal matches by comparing our predictions with the actual game results.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-the-police-should-use-machine-learning-but-very-carefully-121524">Why the police should use machine learning – but very carefully</a>
</strong>
</em>
</p>
<hr>
<p>When we discuss anomalies in sports, we’re talking about matches that stand out from the norm.</p>
<p>While match fixing – deliberate manipulation of results for gain – is one possible explanation for unusual game results, it’s not the only one. Recognising the many reasons behind unusual match results can also help improve our understanding of the complexities of sports.</p>
<p>In the face of an unusual or unexpected result, spectators and officials may ask themselves: was this the result of an unforeseen strategy or are there other influences at play?</p>
<h2>Learning from basketball</h2>
<p>Our research methodology involved training machine learning algorithms to discover patterns between specific past events and subsequent game results.</p>
<p>Once these relationships are established, the algorithms can forecast likely future match outcomes. The discrepancies between these predictions and the actual results can flag potentially abnormal matches.</p>
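A minimal sketch of this predict-and-compare idea is shown below. The data, the z-score threshold and the use of score margins are illustrative assumptions, not the authors' actual model: the point is only that a match whose result sits far from its prediction stands out statistically.

```python
def flag_anomalies(predicted, actual, z_threshold=2.0):
    """Return indices of matches whose prediction error is unusually large."""
    errors = [a - p for p, a in zip(predicted, actual)]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    std = var ** 0.5 or 1.0   # guard against a zero spread
    return [i for i, e in enumerate(errors)
            if abs(e - mean) / std > z_threshold]

# Predicted vs actual score margins for eight matches; match 4 is way off.
predicted = [5, -3, 7, 2, 6, -1, 4, 0]
actual    = [6, -2, 8, 1, -25, 0, 5, 1]
print(flag_anomalies(predicted, actual))  # [4]
```

A flagged index says nothing about *why* the result was unusual – a strategic loss, an off night or manipulation all look the same at this stage – which is exactly why the output goes to regulators for deeper investigation rather than standing as an accusation.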
<p>To test our model, we looked at whether there were any out-of-the-ordinary matches in the 2022 NBA playoffs. We built models using data from 2004 to 2020 to forecast match outcomes and then compared what the machine predicted with actual game results.</p>
<p>We found several anomalies in the 2022 playoffs, particularly in a series of games between the <a href="https://www.nba.com/game/dal-vs-phx-0042100227">Phoenix Suns and Dallas Mavericks</a>. In their seven matches against each other in May 2022, Dallas won four games and Phoenix won three. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/who-will-win-the-2023-rugby-world-cup-this-algorithm-uses-10-000-simulations-to-rank-the-contenders-212598">Who will win the 2023 Rugby World Cup? This algorithm uses 10,000 simulations to rank the contenders</a>
</strong>
</em>
</p>
<hr>
<p>According to the data, the anomalies in the 2022 playoffs included a 0.0000064 probability of the Suns and Mavericks actually playing against each other in the semi-final series of NBA’s Western Conference – which includes 15 teams. </p>
<p>We also identified several players with performances during the playoffs that were particularly abnormal based on the data from their previous games. </p>
<p>This is not to say there was any match fixing involved. Rather, our results flag games and players that could then be followed up by regulators <em>if</em> match fixing was a concern – which it was not; this was simply an example to test the model.</p>
<p>This approach to spotting anomalies within a series of matches can be applied across many sports.</p>
<p>Scrutinising a significant number of anomalies can offer valuable insights into unusual match events, helping regulatory bodies and sports organisations conduct thorough investigations and uphold fair competition.</p>
<h2>Encouraging trust in sports</h2>
<p>Though our study concentrates on specific sports, the principles and techniques can expand to other arenas.</p>
<p>The study shows that machine learning can be used to help safeguard the integrity of sports competitions, and to assist regulatory bodies, sports organisations and law enforcement agencies in maintaining fairness and public trust.</p>
<p>But as we embrace the potential of machine learning, we must also navigate the ethical implications and ensure its transparent use. </p>
<p>The future of sports may well see artificial intelligence become the fans’ ally, helping ensure a level playing field where talent excels, and spectators revel in the authenticity of sporting events.</p><img src="https://counter.theconversation.com/content/209392/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A new machine learning model can pinpoint anomalies in sports results – whether from match fixing, strategic losses or poor player performance. It could be a useful tool in the fight against cheating.Dulani Jayasuriya, Lecturer in Accounting and Finance, University of Auckland, Waipapa Taumata RauJacky Liu, Graduate Teaching Assistant, University of Auckland, Waipapa Taumata RauRyan Elmore, Associate Professor of Business Information and Analytics, University of DenverLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2114482023-08-23T11:07:16Z2023-08-23T11:07:16ZWe’re talking about AI a lot right now – and it’s not a moment too soon<figure><img src="https://images.theconversation.com/files/542343/original/file-20230811-23-omh1qf.jpg?ixlib=rb-1.1.0&rect=33%2C0%2C7372%2C4008&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ai-technology-artificial-intelligence-man-using-2263545623">LookerStudio / Shutterstock</a></span></figcaption></figure><p>When OpenAI unchained the “beast” that is ChatGPT <a href="https://venturebeat.com/ai/chatgpt-launched-six-months-ago-its-impact-and-fallout-is-just-beginning-the-ai-beat/">back in November 2022</a>, the pace of market competition between tech companies involved in AI increased exponentially.</p>
<p>Market competition determines the price of goods and services, their quality and the speed of innovation – which has been remarkable in the AI industry. However, some experts believe we are <a href="https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/">deploying the most powerful technology</a> in the world <a href="https://time.com/6281737/ai-we-cant-trust-big-tech-gary-marcus/">far too quickly</a>.</p>
<p>This could hamper our ability to detect serious problems before they’ve caused damage, resulting in profound implications for society, particularly when we can’t anticipate the capabilities of something that may end up having the ability to train itself.</p>
<p>But AI is nothing new – and while ChatGPT may have taken many people by surprise, the seeds of the current commotion over this technology were laid years ago.</p>
<h2>Is AI new?</h2>
<p>The origins of modern AI can be traced back to developments in the 1950s when Alan Turing worked to solve complex mathematical problems to <a href="https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence">test machine intelligence</a>. </p>
<p>Limited resources and computational power available at the time hindered growth and adoption. But breakthroughs in machine learning, neural networks, and data availability fuelled a resurgence of AI around the early 2000s. That prompted many industries to embrace AI. The finance and telecommunications sectors used it for <a href="https://www.mckinsey.com/featured-insights/artificial-intelligence/the-promise-and-challenge-of-the-age-of-artificial-intelligence">fraud detection and data analytics</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OQSMr-3GGvQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">TED talk by journalist Carole Cadwalladr on the topic of AI.</span></figcaption>
</figure>
<p>An explosion of data, <a href="https://dev.to/aws-builders/the-role-of-ai-in-cloud-computing-a-beginners-guide-to-starting-a-career-4h2#:%7E:text=AI%20and%20cloud%20computing%20work,deploy%20AI%20models%20at%20scale.">the development</a> of <a href="https://medium.com/@raosrinivas2580/how-cloud-computing-influences-artificial-intelligence-5f1a8a2f2d5a">cloud computing</a> and the availability of huge computing resources all later facilitated the development of AI algorithms. This significantly shaped what could be done – for example, image and video recognition and targeted advertising.</p>
<p>Why is AI getting so much attention now? AI has long been used in social media, to recommend relevant posts, articles, videos, and ads. The technology ethicist Tristan Harris says social media is broadly humanity’s “first contact” with AI.</p>
<p>And humanity has learned that AI-driven algorithms on social media platforms can spread <a href="https://www.apa.org/topics/journalism-facts/misinformation-disinformation">disinformation and misinformation</a> – polarising public opinion and <a href="https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review#header--4">fostering online echo chambers</a>. Campaigns spent money on targeting voters online in both the 2016 US presidential election and <a href="https://committees.parliament.uk/committee/378/digital-culture-media-and-sport-committee/news/103668/fake-news-report-published-17-19/">the UK Brexit vote</a>.</p>
<p>Both events led to public awareness about AI and how technology could be used to manipulate political outcomes. These high-profile incidents <a href="https://www.thetimes.co.uk/article/yoshua-bengio-ai-safety-artificial-intelligence-x9mknfnr5">set in motion concerns</a> about the capabilities of evolving technologies.</p>
<p>However, in 2017, a <a href="https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/">new class of AI emerged</a>. This technology is known as a transformer. It’s a machine learning model which processes language and then uses that to produce its own text and have conversations. </p>
<p>This breakthrough facilitated the creation of large language models such as ChatGPT, which can understand and generate text which resembles that written by humans. Transformer-based models such as OpenAI’s GPT (Generative Pre-trained Transformer) have demonstrated impressive capabilities in <a href="https://towardsdatascience.com/what-is-gpt-3-and-why-is-it-so-powerful-21ea1ba59811">generating coherent and relevant text</a>.</p>
<p>The difference with transformers is that, as they absorb new information, they learn from it. This potentially allows them to gain new capabilities that engineers did not programme into them.</p>
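The computational core of a transformer is a "self-attention" step, which can be sketched with toy numbers. In the illustration below (2-dimensional vectors; real models use learned projection matrices and far higher dimensions), each token's new representation is a weighted mix of all tokens – this is what lets the model relate words across a whole passage:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention over lists of small vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # How strongly this token attends to every token (incl. itself).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # New representation: weighted blend of all value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three 2-d token vectors attending to each other.
toks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(toks, toks, toks)
print(len(mixed), len(mixed[0]))  # 3 tokens out, each still 2-d
```

Because every output is a convex combination of the inputs, information from any token can flow into any other in a single step – the property that made transformers so effective for language.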
<h2>Bigger issue</h2>
<p>The processing power now available and the capabilities of the latest AI models mean that as-yet unresolved concerns around the impact of social media on society – especially on younger generations – will only grow.</p>
<p>Lucy Batley, the boss of Traction Industries, a private-sector company which helps businesses integrate AI into their operations, says that the type of analysis that social media companies can carry out on our personal data – and the detail they can extract – is “going to be automated and accelerated to a point where big tech moguls will potentially know more about us than we consciously do about ourselves”. </p>
<p>But <a href="https://www.forbes.com/sites/qai/2023/01/24/quantum-computing-is-coming-and-its-reinventing-the-tech-industry/">quantum computing</a>, which has experienced major breakthroughs in recent years, may far <a href="https://www.independent.co.uk/tech/google-quantum-computer-apocalypse-encryption-password-security-b2393516.html">surpass the performance</a> of conventional computers on particular tasks. Batley believes this would “allow the development of much more capable AI systems to probe multiple aspects of our lives”.</p>
<p>The situation for “big tech” and the countries that are leading in AI can be likened to what game theorists call the “prisoner’s dilemma”. In this dilemma, two parties must each decide whether to work together to solve a problem or to betray the other. Each faces a tough choice between betrayal – which often yields the higher individual reward – and cooperation, <a href="https://plato.stanford.edu/entries/prisoner-dilemma/">with its potential for mutual benefit</a>. </p>
<p>Let’s take a scenario where we have two competing tech companies. They need to decide whether they should cooperate by sharing their research on cutting-edge technology or keep their research secret. If both companies collaborate, they could make significant advancements together. However, if Company A shares while Company B doesn’t, Company A probably loses its competitive edge.</p>
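The two-company scenario can be written out as a payoff matrix. The numbers below are assumptions chosen to illustrate the dilemma's structure, not figures from the article: withholding research while the rival shares yields the biggest individual payoff, yet mutual sharing beats mutual withholding.

```python
# (A's choice, B's choice) -> (A's payoff, B's payoff)
PAYOFFS = {
    ("share", "withhold"):    (0, 5),  # A loses its competitive edge
    ("share", "share"):       (3, 3),  # significant advancements together
    ("withhold", "share"):    (5, 0),
    ("withhold", "withhold"): (1, 1),  # secrecy: slow progress for both
}

def best_response(opponent_choice):
    """Company A's payoff-maximising move given what Company B does."""
    return max(["share", "withhold"],
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Whatever B does, A is individually better off withholding...
print(best_response("share"), best_response("withhold"))
# ...even though (share, share) pays both sides more than (withhold, withhold).
```

This is the dilemma's signature: the individually rational move for each party leads to an outcome worse for both than cooperation would have been.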
<p>This is not too dissimilar from the current situation the US finds itself in. The US is trying to accelerate AI development to beat foreign competition. As such, policymakers have been slow to discuss AI regulation, which would help protect society from harms caused by use of the technology.</p>
<h2>Uncharted territory</h2>
<p>AI’s potential to create societal problems must be averted. We have a duty to understand these risks, and we need a collective focus to avoid repeating the mistakes made with social media. We were too late to regulate social media: by the time that conversation entered the public domain, social platforms had already entangled themselves with the media, elections, businesses and users’ lives. </p>
<p>The first major global summit on AI safety is planned for later <a href="https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence">this year, in the UK</a>. This is an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach. This is also a chance to invite a broader range of voices from society to discuss this significant issue, resulting in a more diverse array of perspectives on a complex matter that will affect everyone.</p>
<p>AI has huge potential to increase the quality of life on Earth, but we all have a duty to help encourage the development of responsible AI systems. We must also collectively push for brands to operate with ethical guidelines within regulatory frameworks. The best time to influence a medium is at the very start of its journey.</p><img src="https://counter.theconversation.com/content/211448/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kimberley Hardcastle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The seeds of the current commotion over AI were laid years ago.Kimberley Hardcastle, Assistant Professor in Marketing, Northumbria University, NewcastleLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2106632023-08-10T12:25:02Z2023-08-10T12:25:02ZAI threatens to add to the growing wave of fraud but is also helping tackle it<figure><img src="https://images.theconversation.com/files/541723/original/file-20230808-19-q8t3ng.jpg?ixlib=rb-1.1.0&rect=0%2C24%2C5452%2C3812&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The government, banks and other financial organisations are now dealing with fraud by using increasingly sophisticated detection methods.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/internet-fraud-darknet-data-thiefs-cybercrime-1716862513">Maksim Shmeljov/Shutterstock</a></span></figcaption></figure><p>There were <a href="https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/articles/natureoffraudandcomputermisuseinenglandandwales/yearendingmarch2022">4.5 million</a> reported incidents of fraud in the UK in 2021/22, up 25% on the year before. It is a growing problem which costs billions of pounds every year. </p>
<p>The COVID pandemic and the cost of living crisis have created <a href="https://www.bbc.co.uk/news/business-55769991">ideal conditions</a> for fraudsters to exploit the vulnerability and desperation of many households and businesses. And with the use of AI increasing in general, we will likely see a further rise in <a href="https://www2.deloitte.com/uk/en/blog/auditandassurance/2023/generative-ai-and-fraud-what-are-the-risks-that-firms-face.html">new types of fraud</a>. Indeed, AI is probably already contributing to the increased frequency of fraud we see today. </p>
<p>Already, the ability of AI to absorb personal data, such as emails, photographs, videos and <a href="https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/#:%7E:text=Artificial%20intelligence%20is%20making%20phone,mounting%20losses%20due%20to%20fraud.">voice recordings</a>, and use it to imitate people is proving an unprecedented challenge. </p>
<p>But there is also an upside. The government, banks and other financial organisations are now fighting back with increasingly sophisticated fraud-detection methods. AI and machine learning models could be a <a href="https://www.weforum.org/agenda/2023/04/as-generative-ai-gains-pace-industry-leaders-explain-how-to-make-it-a-force-for-good/">part of the solution</a> to deal with the increasing complexity, sophistication and prevalence of such scams.</p>
<p>The rising gap between prices and people’s incomes appears to have made people more <a href="https://www.citizensadvice.org.uk/about-us/about-us1/media/press-releases/over-40-million-targeted-by-scammers-as-the-cost-of-living-crisis-bites/">receptive</a> to scams which offer grants, rebates and support payments. </p>
<p>Fraudsters often target individuals by posing as genuine organisations. Examples include pretending to be your bank, or a government department telling you that you are eligible for a lucrative scheme, in order to steal your identity details and then your money. </p>
<p>This follows a dramatic rise in recent years in fraudulent applications to government and regional support packages, mainly those introduced in response to the pandemic. Here, fraudsters often set up fake businesses to secure multiple loans or grants. </p>
<p>One of the <a href="https://www.manchestereveningnews.co.uk/news/greater-manchester-news/man-who-pretended-greggs-bakery-27251086">most outlandish examples</a> of this was a Luton man who posed as a Greggs bakery to swindle three local authorities in England out of almost £200,000 worth of COVID small business grants.</p>
<p>These schemes were rolled out quickly to speed up their economic impact, which made it difficult for officials to review applications effectively. The UK government’s Department for Business and Trade now <a href="https://www.bbc.co.uk/news/business-59504943">estimates</a> that 11% of such loans, roughly £5 billion, were fraudulent. By March 2022 only £762 million <a href="https://www.gov.uk/government/publications/hmrc-issue-briefing-tackling-error-and-fraud-in-the-covid-19-support-schemes/tackling-error-and-fraud-in-the-covid-19-support-schemes">had been recovered</a>.</p>
<h2>Fraud detection</h2>
<p>Over the past few years, complex mathematical models combining traditional statistical techniques and machine-learning analysis have shown promise in the <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/acfi.12742">early detection</a> of financial statement fraud. This is when companies misrepresent their accounts to deceive investors into believing they are more profitable than they really are.</p>
<p>One of the breakthroughs has been the incorporation of both financial and non-financial information into data analysis systems. For example, the risk of fraud decreases if there is <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/acfi.12742">better corporate governance</a> and a lower proportion of directors who are also executives. </p>
<p>In a small business context, we can think about this as promoting transparency and making sure that important positions do not have sole authority to make significant decisions. </p>
<p>Such data analytics models can be used to rank applications in terms of potential fraud risk, so that the riskiest applications get additional scrutiny by government officials. We are now starting to see implementations of such systems to tackle <a href="https://www.theguardian.com/society/2023/jul/11/use-of-artificial-intelligence-widened-to-assess-universal-credit-applications-and-tackle">universal credit</a> fraud, for example.</p>
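The risk-ranking idea described above can be sketched with a standard machine-learning library. Everything here is illustrative: the feature names, the synthetic data and the choice of a gradient-boosting classifier are assumptions, not details of any real government or banking system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic historical applications: columns are amount_requested (scaled),
# num_prior_applications (scaled), days_since_registration (scaled),
# director_is_sole_executive (0/1). All values are made up.
X = rng.random((1000, 4))
X[:, 3] = (X[:, 3] > 0.5).astype(float)

# Made-up labels: 1 = fraudulent. Fraud here correlates with large amounts
# and concentrated decision-making, echoing the governance point above.
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.3, 1000) > 1.2).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank new applications so the riskiest get manual review first.
new_apps = rng.random((5, 4))
new_apps[:, 3] = (new_apps[:, 3] > 0.5).astype(float)
risk = model.predict_proba(new_apps)[:, 1]
review_order = np.argsort(risk)[::-1]  # indices, highest predicted risk first
```

The output is a ranking rather than a verdict: officials would review the top of the queue first, consistent with using the model as triage rather than as an automated accuser.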
<p><a href="https://www.ft.com/content/0dca8946-05c8-11e8-9e12-af73e8db3c71">Banks, financial services providers</a> and <a href="https://www.ft.com/content/d3bd46cb-75d4-40ff-a0cd-6d7f33d58d7f">insurers</a> are developing machine-learning models to detect financial fraud too. A Bank of England survey published in October 2022 <a href="https://www.bankofengland.co.uk/report/2022/machine-learning-in-uk-financial-services">revealed</a> that 72% of financial services firms are already testing and implementing them. </p>
<p>We are also seeing new collaborations in the industry, with the likes of Deutsche Bank partnering with chip maker Nvidia to <a href="https://www.db.com/news/detail/20221207-deutsche-bank-partners-with-nvidia-to-embed-ai-into-financial-services">embed AI</a> into their fraud detection systems.</p>
<h2>Risks of AI systems</h2>
<p>However, the advent of new automated AI systems brings with it the risk of unintended biases. During a <a href="https://www.bbc.co.uk/news/uk-politics-66133665">recent trial</a> of a new AI fraud detection system by the Department for Work and Pensions, campaign groups raised concerns about exactly this. </p>
<p>A common issue with such systems is that they work well for the majority of people but are often biased against minority groups. If left unadjusted, they are disproportionately likely to flag applications from ethnic minorities as risky.</p>
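One simple way to check for this kind of disparity is to compare how often a model flags applications from each group. The sketch below uses made-up decisions and group labels; real fairness audits are far more involved, but a rate ratio like this is a common first diagnostic.

```python
import numpy as np

def flag_rate_ratio(flagged, group):
    """Ratio of the minority-group flag rate to the majority-group flag rate."""
    flagged, group = np.asarray(flagged), np.asarray(group)
    rates = {g: flagged[group == g].mean() for g in np.unique(group)}
    return rates["minority"] / rates["majority"]

# Made-up decisions from a hypothetical screening model (1 = flagged as risky).
flagged = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["minority", "majority"] * 4)

ratio = flag_rate_ratio(flagged, group)
# A ratio well above 1 means minority applications are flagged
# disproportionately often, signalling the model needs adjustment.
```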
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/scams-deepfake-porn-and-romance-bots-advanced-ai-is-exciting-but-incredibly-dangerous-in-criminals-hands-199004">Scams, deepfake porn and romance bots: advanced AI is exciting, but incredibly dangerous in criminals' hands</a>
</strong>
</em>
</p>
<hr>
<p>But AI systems should not be used as a fully automated process for detecting fraud and accusing people. Rather, they should serve <a href="https://www.ft.com/content/2df33fc5-981a-4952-8dc6-d4eee7343acc">as a tool</a> to assist human assessors. They can help auditors and civil servants, for example, to identify cases where greater scrutiny is required and to reduce processing time.</p><img src="https://counter.theconversation.com/content/210663/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Adrian Gepp has received funding from the Accounting and Finance Association of Australia and New Zealand. He is also affiliated with the Association of Certified Fraud Examiners. </span></em></p><p class="fine-print"><em><span>Laurence Jones does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Fraud was up 25% in the UK in 2021/22.Laurence Jones, Lecturer in Finance, Bangor UniversityAdrian Gepp, Professor of Data Analytics, Bangor UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2101702023-08-09T12:32:17Z2023-08-09T12:32:17ZAI can help forecast air quality, but freak events like 2023’s summer of wildfire smoke require traditional methods too<figure><img src="https://images.theconversation.com/files/541336/original/file-20230805-83673-xiqg41.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3494%2C2331&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Thick smoke rolling in from Canada's 2023 wildfires was a wakeup call for several cities.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/people-wear-masks-as-they-wait-for-the-tramway-to-roosevelt-news-photo/1258511415">Eduardo Munoz Alvarez/Getty Images</a></span></figcaption></figure><p>Wildfire smoke from <a href="https://twitter.com/_HannahRitchie/status/1685583683707682816">Canada’s extreme fire season</a> has left a lot of people thinking about air quality and wondering what to expect in the days ahead.</p>
<p>All air contains gaseous compounds and small particles. But as air quality gets worse, these gases and particles can <a href="https://theconversation.com/extreme-heat-and-air-pollution-can-be-deadly-with-the-health-risk-together-worse-than-either-alone-187422">trigger asthma</a> and <a href="https://theconversation.com/wildfire-smoke-can-harm-human-health-even-when-the-fire-is-burning-hundreds-of-miles-away-a-toxicologist-explains-why-206057">exacerbate heart and respiratory problems</a> as they enter the nose, throat and lungs and even circulate in the bloodstream. When wildfire smoke turned New York City’s skies orange in early June 2023, <a href="https://gothamist.com/news/nyc-hospitals-saw-twice-as-many-asthma-er-visits-as-bad-air-blanketed-city">emergency room visits</a> for asthma doubled.</p>
<p>In <a href="https://www.airnow.gov/">most cities</a>, it’s easy to find a daily <a href="https://www.lung.org/clean-air/outdoors/air-quality-index">air quality index score</a> that tells you when the air is considered unhealthy or even hazardous. However, predicting air quality in the days ahead isn’t so simple.</p>
<p>I work on air quality forecasting as a <a href="https://cee.utk.edu/people/joshua-s-fu/">professor of civil and environmental engineering</a>. Artificial intelligence has improved these forecasts, but research shows it’s much more useful when paired with traditional techniques. Here’s why.</p>
<h2>How scientists predict air quality</h2>
<p>To predict air quality in the near future – a few days ahead or longer – scientists generally rely on two <a href="https://www.airnow.gov/aqi/aqi-basics/using-air-quality-index/#forecasts">main methods</a>: a <a href="https://www.airnow.gov/sites/default/files/2020-06/aq-forecasting-guidance-1016.pdf">chemical transport model</a> or a machine-learning model. These two models generate results in totally different ways.</p>
<p>Chemical transport models use many known chemical and physical formulas to calculate the presence and production of air pollutants. They draw on emissions inventories reported by local agencies, which list pollutants from known sources such as wildfires, traffic <a href="https://www.epa.gov/air-emissions-inventories/2020-nei-supporting-data-and-summaries">or factories</a>, and on meteorological data covering atmospheric conditions such as wind, precipitation, temperature and solar radiation.</p>
<p>These models simulate the flow and chemical reactions of the air pollutants. However, their simulations involve multiple variables with huge uncertainties. Cloudiness, for example, changes the incoming solar radiation and thus the photochemistry. This can make the results less accurate.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A map shows many yellow dots through the Midwest. in particular, where wildfire smoke has been blowing in from Canada." src="https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=404&fit=crop&dpr=1 600w, https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=404&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=404&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=508&fit=crop&dpr=1 754w, https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=508&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/541950/original/file-20230809-15-9ddhgg.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=508&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The EPA’s AirNow air pollution forecasts use machine learning. During wildfire events, a smoke-transport and dispersion model helps to simulate the spread of smoke plumes. This map is the forecast for Aug. 9, 2023. Yellow indicates moderate risk; orange indicates unhealthy air for sensitive groups.</span>
<span class="attribution"><a class="source" href="https://gispub.epa.gov/airnow/index.html">AirNow.gov</a></span>
</figcaption>
</figure>
<p>Machine-learning models instead learn patterns from historical air quality data for a given region, then apply that knowledge to current conditions to predict future air quality. </p>
<p>The downside of machine-learning models is that they do not consider any chemical and physical mechanisms, as chemical transport models do. Also, the accuracy of machine-learning projections under extreme conditions, such as heat waves or wildfire events, can be off if the models weren’t trained on such data. So, while machine-learning models can show where and when high pollution levels are most likely, such as during rush hour near freeways, they generally cannot deal with more random events, like wildfire smoke blowing in from Canada. </p>
<h2>Which is better?</h2>
<p>Scientists have determined that neither model is accurate enough on its own, but using the <a href="https://doi.org/10.1016/j.atmosenv.2022.118961">best attributes of both</a> models together <a href="https://doi.org/10.1016/j.envint.2023.107969">can help better predict the quality</a> of the air we breathe. </p>
<p>This combined method, known as machine-learning – measurement model fusion, or ML-MMF, can provide science-based predictions with <a href="https://doi.org/10.1016/j.envint.2023.107969">more than 90% accuracy</a>. It is based on known physical and chemical mechanisms and can simulate the whole process, from the air pollution source to your nose. Adding satellite data can help forecasters inform the public with greater accuracy about both air quality safety levels and the direction pollutants are traveling. </p>
<p>We recently <a href="https://doi.org/10.1016/j.envint.2023.107969">compared predictions from all three models</a> with actual pollution measurements. The results were striking: The combined model was 66% more accurate than the chemical transport model and 12% more accurate than the machine-learning model alone.</p>
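The fusion idea can be sketched as a machine-learning model that learns to correct a chemical transport model's output against past measurements. This is a toy illustration with synthetic data, not the authors' ML-MMF method; the random-forest choice, variable names and the invented bias are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Synthetic training data: a chemical transport model (CTM) forecast plus
# meteorology, paired with "measured" PM2.5 that the ML model learns to match.
ctm_forecast = rng.uniform(5, 60, n)    # CTM PM2.5 forecast, µg/m³
wind_speed = rng.uniform(0, 10, n)      # m/s
temperature = rng.uniform(-5, 35, n)    # °C
# Pretend the CTM has a weather-dependent bias that the measurements reveal.
measured = ctm_forecast * (0.8 + 0.02 * wind_speed) + 0.1 * temperature

X = np.column_stack([ctm_forecast, wind_speed, temperature])
fusion = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, measured)

# Fused forecast: the CTM's new output, corrected by the learned bias pattern.
fused = fusion.predict([[40.0, 5.0, 20.0]])
```

The key design point is that the physics-based forecast goes in as a feature, so the ML step only has to learn the residual error rather than the whole atmosphere, which is why the combined approach can outperform either model alone.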
<p>The chemical transport model is still the most common method used today to predict air quality, but applications with machine-learning models are becoming more popular. The regular <a href="https://gispub.epa.gov/airnow/index.html">forecasting method</a> used by the U.S. Environmental Protection Agency’s <a href="https://www.airnow.gov/">AirNow.gov</a> relies on machine learning. The site also compiles air quality forecast results from state and local agencies, most of which use <a href="https://www.epa.gov/cmaq">chemical transport</a> <a href="https://www.camx.com/">models</a>.</p>
<p>As information sources become more reliable, the combined models will become more accurate ways to forecast hazardous air quality, particularly during unpredictable events like wildfire smoke.</p><img src="https://counter.theconversation.com/content/210170/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Joshua S. Fu received funding from U. S. EPA for wildfire and human health studies. </span></em></p>Air quality forecasting is getting better, thanks in part to AI. That’s good, given the health impact of air pollution. An environmental engineer explains how systems warn of incoming smog or smoke.Joshua S. Fu, Chancellor's Professor in Engineering, Climate Change and Civil and Environmental Engineering, University of TennesseeLicensed as Creative Commons – attribution, no derivatives.