tag:theconversation.com,2011:/us/topics/algorithm-1795/articlesAlgorithm – The Conversation2024-03-26T17:01:56Ztag:theconversation.com,2011:article/2262572024-03-26T17:01:56Z2024-03-26T17:01:56ZHow long before quantum computers can benefit society? That’s Google’s US$5 million question<figure><img src="https://images.theconversation.com/files/583117/original/file-20240320-26-rmpub2.jpg?ixlib=rb-1.1.0&rect=5%2C0%2C3828%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/quantum-computer-black-background-3d-render-1571871052">Bartlomiej K. Wroblewski / Shutterstock</a></span></figcaption></figure><p>Google and the XPrize Foundation have launched a competition worth US$5 million (£4 million) to develop <a href="https://blog.google/technology/research/google-gesda-and-xprize-launch-new-competition-in-quantum-applications/">real-world applications for quantum computers</a> that benefit society – by speeding up progress on one of the UN Sustainable Development Goals, for example. The principles of quantum physics suggest quantum computers could perform very fast calculations on particular problems, so this competition may expand the range of applications where they have an advantage over conventional computers.</p>
<p>In our everyday lives, the way nature works can generally be described by what we call <a href="https://en.wikipedia.org/wiki/Classical_physics#:%7E:text=Classical%20physical%20concepts%20are%20often,of%20quantum%20mechanics%20and%20relativity.">classical physics</a>. But nature behaves very differently at tiny quantum scales – below the size of an atom. </p>
<p>The race to harness quantum technology can be viewed as a new industrial revolution, progressing from devices that use the properties of classical physics to those utilising the <a href="https://www.energy.gov/science/doe-explainsquantum-mechanics#:%7E:text=Quantum%20mechanics%20is%20the%20field,%E2%80%9Cwave%2Dparticle%20duality.%E2%80%9D">weird and wonderful properties of quantum mechanics</a>. Scientists have spent decades trying to develop new technologies by harnessing these properties. </p>
<p>Given how often we are told that <a href="https://projects.research-and-innovation.ec.europa.eu/en/horizon-magazine/quantum-technologies">quantum technologies</a> will revolutionise our everyday lives, you may be surprised that we still have to search for practical applications by offering a prize. However, while there are numerous examples of success using quantum properties for enhanced precision in sensing and timing, there has been a surprising lack of progress in the development of quantum computers that outdo their classical predecessors.</p>
<p>The main bottleneck holding up this development is that the software – using <a href="https://www.nature.com/articles/npjqi201523">quantum algorithms</a> –
needs to demonstrate an advantage over computers based on classical physics. This is commonly known as <a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">“quantum advantage”</a>.</p>
<p>A crucial way quantum computing differs from classical computing is in using a property known as <a href="https://spectrum.ieee.org/what-is-quantum-entanglement">“entanglement”</a>. Classical computing <a href="https://web.stanford.edu/class/cs101/bits-bytes.html">uses “bits”</a> to represent information. These bits consist of ones and zeros, and everything a computer does comprises strings of these ones and zeros. But quantum computing allows these bits to be in a <a href="https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-qubit">“superposition” of ones and zeros</a>. In other words, it is as if these ones and zeros occur simultaneously in the quantum bit, or qubit.</p>
<p>It is this property that, in principle, allows many computational paths to be explored at once – hence the belief that quantum computing can offer a significant advantage over classical computing on certain tasks. </p>
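As a rough illustration, a single qubit can be modelled on a classical computer as a two-component vector of amplitudes. The following NumPy sketch is ours, not drawn from any quantum library; it shows a qubit put into an equal superposition, where a measurement is equally likely to read 0 or 1:

```python
import numpy as np

# Basis states |0> and |1>, represented as amplitude vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = H @ ket0  # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- an equal chance of reading 0 or 1
```

A real quantum computer manipulates such amplitudes natively; simulating n entangled qubits this way needs a vector of 2^n amplitudes, which is exactly why classical simulation breaks down.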
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-quantum-advantage-a-quantum-computing-scientist-explains-an-approaching-milestone-marking-the-arrival-of-extremely-powerful-computers-213306">What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers</a>
</strong>
</em>
</p>
<hr>
<h2>Notable quantum algorithms</h2>
<p>While performing many tasks simultaneously should lead to a performance increase over classical computers, putting this into practice has proven more difficult than theory would suggest. There are actually only a few notable quantum algorithms which can perform their tasks better than those using classical physics.</p>
<figure class="align-center ">
<img alt="Quantum chips - rendering" src="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583127/original/file-20240320-20-fnde2i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/futuristic-cpu-quantum-processor-global-computer-1210158169">Yurchanka Siarhei / Shutterstock</a></span>
</figcaption>
</figure>
<p>The most notable are the <a href="https://www.st-andrews.ac.uk/physics/quvis/simulations_html5/sims/cryptography-bb84/Quantum_Cryptography.html">BB84 protocol</a>, developed in 1984, and <a href="https://www.nature.com/articles/s41598-021-95973-w">Shor’s algorithm</a>, developed in 1994, both of which exploit quantum properties to outperform classical approaches on particular tasks. </p>
<p>The BB84 protocol is a cryptographic protocol – a system for ensuring secure, private communication between two or more parties – that is considered more secure than comparable classical schemes.</p>
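The key-agreement ("sifting") step of BB84 can be simulated classically as a toy example. In this sketch, which is ours and heavily simplified (no eavesdropper or error correction is modelled), Alice and Bob keep only the bits where their randomly chosen measurement bases happen to agree:

```python
import random

random.seed(1)
n = 16

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each photon in a randomly chosen basis. When his basis
# matches Alice's, he recovers her bit; otherwise his result is random.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: the two publicly compare bases (never bits) and keep the matches.
sifted_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                if ab == bb]
sifted_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
              if ab == bb]

assert sifted_alice == sifted_bob  # shared secret key material
```

The quantum part of the real protocol is what makes eavesdropping detectable: measuring a photon in the wrong basis disturbs it, so an interceptor leaves statistical fingerprints.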
<p>Shor’s algorithm demonstrates how current <a href="https://www.rand.org/pubs/commentary/2023/09/when-a-quantum-computer-is-able-to-break-our-encryption.html#:%7E:text=One%20of%20the%20most%20important,secure%20internet%20traffic%20against%20interception.">classical encryption protocols can be broken</a>: their security rests on the difficulty of factorising very large numbers, which a quantum computer running Shor’s algorithm could do efficiently. <a href="https://ieeexplore.ieee.org/document/365700">There is also evidence</a> that it can perform certain calculations faster than comparable algorithms designed for conventional computers. </p>
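The classical half of Shor's algorithm can be sketched for a tiny number. In this illustration (the values N = 15 and a = 7 are the textbook toy case), the period-finding step is simply brute-forced; that step is the only part a quantum computer actually speeds up:

```python
from math import gcd

# Toy factorisation of N = 15 via order finding, as in Shor's algorithm.
N, a = 15, 7

# Find the period r of f(x) = a^x mod N by brute force. On a quantum
# computer this step is done exponentially faster via the quantum
# Fourier transform; everything else below is ordinary arithmetic.
r = 1
while pow(a, r, N) != 1:
    r += 1

# If r is even (and a^(r/2) is not -1 mod N), the factors fall out via gcd.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)  # 4 3 5 -- i.e. 15 = 3 x 5
```

For a 2,048-bit RSA modulus the brute-force loop above is hopeless, which is precisely why RSA is secure against classical attack and why a large, error-corrected quantum computer would change the picture.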
<p>Despite the superiority of these two algorithms over conventional ones, few advantageous quantum algorithms have followed. However, researchers have not given up trying to develop them. Currently, there are a couple of main directions in research.</p>
<h2>Potential quantum benefits</h2>
<p>The first is to use quantum mechanics to assist in what are called <a href="https://arxiv.org/abs/2312.02279">large-scale optimisation tasks</a>. Optimisation – finding the best or most effective way to solve a particular task – is vital in everyday life, from ensuring traffic flow runs effectively, to managing operational procedures in factory pipelines, to streaming services deciding what to recommend to each user. It seems clear that quantum computers could help with these problems.</p>
<p>If we could reduce the computational time required to perform the optimisation, it could save energy, reducing the carbon footprint of the many computers currently performing these tasks around the world and the data centres supporting them.</p>
<p>Another development that could offer wide-reaching benefits is to use quantum computation to simulate systems, such as combinations of atoms, that behave according to quantum mechanics. Understanding and predicting how quantum systems work in practice could, for example, lead to better drug design and medical treatments. </p>
<p>Quantum systems could also lead to improved electronic devices. As computer chips get smaller, quantum effects take hold, potentially reducing the devices’ performance. A better fundamental understanding of quantum mechanics could help avoid this.</p>
<p>While there has been significant investment in building quantum computers, there has been less focus on ensuring they will directly benefit the public. However, that now appears to be changing.</p>
<p>Whether we will all have quantum computers in our homes within the next 20 years remains doubtful. But, given the current financial commitment to making quantum computation a practical reality, it seems that society is finally in a better position to make use of them. What precise form will this take? There’s US$5 million on the line to find out.</p><img src="https://counter.theconversation.com/content/226257/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Adam Lowe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Quantum computing has huge promise from a technical perspective, but the practical benefits are less clear.Adam Lowe, Lecturer, School of Computer Science and Digital Technologies, Aston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2248822024-03-26T16:03:37Z2024-03-26T16:03:37ZWe built an AI tool to help set priorities for conservation in Madagascar: what we found<p><em>Artificial Intelligence (AI) – models that process large and diverse datasets and make predictions from them – can have many uses in nature conservation, such as remote monitoring (like the use of camera traps to study animals or plants) or data analysis. Some of these are controversial because AI can be trained to be biased, but others are valuable research tools.</em></p>
<p><em>Biologist Daniele Silvestro has developed an <a href="https://www.nature.com/articles/s41893-022-00851-6">AI tool</a> that can help identify conservation and restoration priorities. We asked him to tell us more about how it works and what it offers.</em></p>
<hr>
<h2>How does your artificial intelligence tool for conservation work?</h2>
<p>Artificial intelligence (AI) is a term indicating a broad family of models used to process large and diverse datasets and make predictions from them. </p>
<p>We built a model using biodiversity datasets as well as socioeconomic data. The aim was to identify optimal strategies to conserve nature. Our AI tool, Conservation Area Prioritisation through Artificial Intelligence (Captain), uses a type of AI called <a href="https://online.york.ac.uk/what-is-reinforcement-learning/">reinforcement learning</a>. This is a family of algorithms that optimises decisions within a dynamic environment. </p>
<p>The tool we built was the result of years of work involving an international team with experience in biology, sustainable economics, maths and computer science.</p>
<p>The software we developed can take multiple types of data as input, including biodiversity maps, species ranges, climate and predicted climate change, as well as socioeconomic data such as cost of land and a budget available for conservation action. It then processes this information and, based on a set conservation target (for example, to include all endangered species in a protected area, or to protect as many species as possible) it suggests a conservation policy.</p>
<p>The tool’s environment is a simulation of biodiversity, an artificial world with species and individuals that reproduce, migrate and die through time. We use the tool to look for the most appropriate conservation policy. </p>
<p>It works similarly to a video game where the player (called the agent) is the “brains” of our software. The goal of the game is to protect biodiversity and prevent as many species as possible from going extinct within a simulated environment that includes human pressure and climate change. </p>
<p>The agent observes the environment and tries to place protected areas in this environment in the best way. At the end of the game the agent gets a reward for each species it manages to save from extinction. It will have to play the game many times to learn how to best interpret the environment and place the protected areas. After that, the model is trained and can be used with real biodiversity data to identify conservation priorities that should maximise biodiversity protection. </p>
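The game loop described above can be caricatured with a much simpler reinforcement-learning sketch. Everything here, from the landscape of cells to the reward numbers, is invented for illustration and is far simpler than the actual Captain model, but it shows the core idea: an agent that starts out ignorant learns, by trial, error and reward, which areas are worth protecting:

```python
import random

random.seed(0)

# A hypothetical landscape: each cell has a hidden expected number of
# species saved if it is protected (values are invented).
true_value = [0.2, 0.9, 0.4, 0.7, 0.1]
n_cells = len(true_value)

# The agent learns value estimates from noisy episodes, mostly choosing
# the best-looking cell ("exploit") but sometimes trying others ("explore").
estimates = [0.0] * n_cells
counts = [0] * n_cells
epsilon = 0.1  # exploration rate

for episode in range(5000):
    if random.random() < epsilon:
        cell = random.randrange(n_cells)        # explore a random cell
    else:
        cell = estimates.index(max(estimates))  # protect the best-looking cell
    # Reward: noisy observation of species saved in this episode.
    reward = true_value[cell] + random.gauss(0, 0.1)
    counts[cell] += 1
    # Incremental running average of observed rewards for this cell.
    estimates[cell] += (reward - estimates[cell]) / counts[cell]

best = estimates.index(max(estimates))
print(best)  # after enough episodes the agent settles on cell 1, the richest
```

The real tool replaces this one-shot choice with sequential placement of protected areas in a simulated, changing world, but the learn-from-reward structure is the same.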
<h2>Why did you test the tool in Madagascar? What did you find?</h2>
<p>The <a href="https://www.kew.org/sites/default/files/2023-10/State%20of%20the%20World%27s%20Plants%20and%20Fungi%202023.pdf">State of the World’s Plants and Fungi report</a> showed that biodiversity is facing unprecedented threats, with as many as 45% of all plant species at risk of extinction. Together with climate change, this is one of the major challenges humanity faces, given our dependence on the natural world for our survival. </p>
<p>In a recent <a href="https://www.science.org/doi/full/10.1126/science.adf1466">paper</a> we summarised the extent of Madagascar’s extraordinary concentration of biodiversity with thousands of species of plants, animals and fungi. The project was led by Hélène Ralimanana of the Royal Botanic Gardens, Kew and Kew Madagascar Conservation Centre. </p>
<p>By applying the Captain tool to a dataset of endemic trees of Madagascar we were able to identify the most important areas for biodiversity protection in the country, for instance the area in the Sava region, where the Marojejy National Park has long been established. </p>
<p>Madagascar already has a number of conservation areas and programmes. What our experiment shows is that the technology we developed can be used with real-world data. We hope it can guide conservation planning.</p>
<h2>Who do you think can use the Captain AI?</h2>
<p>We think it can help policy makers, practitioners and companies in guiding conservation and restoration planning. In particular, the software can use diverse types of data in addition to biodiversity data. For instance it can use costs and opportunity costs related to setting up protected or restoration areas. It can also use future climate scenarios. </p>
<h2>Is technology alone enough to conserve biodiversity?</h2>
<p>Certainly not. Technology can help us by crunching the numbers and disentangling complex data. But there are many aspects of conservation that are not easily quantifiable as numbers. There are aspects of cultural value of land and nature, and social and political issues related to the fair distribution of resources. These are issues for real humans to take into account, rather than artificial intelligence programs. </p>
<p>Technology and science can (and should) assist us in making decisions, but ultimately the protection and conservation of the natural world is and must be in the hands of humans, not software.</p><img src="https://counter.theconversation.com/content/224882/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniele Silvestro is a computational biologist at the University of Fribourg (Switzerland) and University of Gothenburg (Sweden). He is also a co-founder of CAPTAIN Technologies LTD.
D.S. acknowledges funding from the Swiss National Science Foundation (PCEFP3_187012), the Swedish Research Council (2019-04739), and the Swedish Foundation for Strategic Environmental Research MISTRA within the framework of the research programme BIOPATH (F 2022/1448).</span></em></p>Conservation of biodiversity is in the hands of humans but artificial intelligence can help guide decisions.Daniele Silvestro, Assistant Professor, Department of Biology, University of FribourgLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2240442024-03-13T12:28:20Z2024-03-13T12:28:20ZRobo-advisers are here – the pros and cons of using AI in investing<figure><img src="https://images.theconversation.com/files/580679/original/file-20240308-28-55toe3.jpg?ixlib=rb-1.1.0&rect=59%2C0%2C7951%2C4345&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/smart-businessman-hand-close-nft-financial-2074315681">thinkhubstudio/Shutterstock</a></span></figcaption></figure><p>Artificial intelligence (AI) is <a href="https://www.ft.com/content/6766a3bd-1cec-4e88-9f51-5ed93b39528c">shaking up</a> the way we invest our money. Gone are the days when complex tools were reserved for the wealthy or financial institutions. </p>
<p>AI-powered <a href="https://www.investopedia.com/best-robo-advisors-4693125">robo-advisers</a>, such as <a href="https://www.betterment.com/">Betterment</a> and <a href="https://investor.vanguard.com/advice/robo-advisor">Vanguard</a> in the US, and finance app <a href="https://www.revolut.com/en-HU/news/revolut_launches_robo_advisor_in_eea_to_automate_investing/">Revolut</a> in Europe, are now democratising investment. These tools are making professional financial insight and portfolio management available to everyone. But although there are plenty of advantages to using robo-advisers, there are downsides too. </p>
<p>Since the 1990s, <a href="https://arxiv.org/pdf/2104.05413.pdf">AI’s role</a> in this sector has typically been confined to algorithmic trading and quantitative strategies. These rely on advanced mathematical models to predict stock market movements and trade at lightning speed, far exceeding the capabilities of human traders. </p>
<p>But that laid the groundwork for more advanced applications. And AI has now <a href="https://www.weforum.org/agenda/2017/09/robots-could-plan-your-retirement-financial-advice/">evolved</a> to handle data analysis, predict trends and personalise investment strategies. Unlike traditional investment tools, robo-advisers are more <a href="https://www2.deloitte.com/us/en/insights/industry/financial-services/financial-services-industry-predictions/2023/democratize-financial-services.html">accessible</a>, making them ideal for a new generation of investors. </p>
<p>A survey published in 2023 showed that there has been a particular <a href="https://www.investopedia.com/study-affluent-millennials-are-warming-up-to-robo-advisors-4770577">surge</a> in young people using robo-advisers. Some 31% of gen Zs (born after 2000) and 20% of millennials (born between 1980 and 2000) are using robo-advisers. </p>
<p>Another <a href="https://www.magnifymoney.com/news/robo-advisor-survey/">survey</a> from 2022 found that 63% of US consumers were open to using a robo-adviser to manage their investments. In fact, projections indicate that assets managed by robo-advisers will reach <a href="https://www.statista.com/outlook/fmo/wealth-management/digital-investment/robo-advisors/worldwide">US$1.8 trillion</a> (£1.4 trillion) globally in 2024. </p>
<p>This trend reflects not only changing investor preferences but also how the financial industry is adapting to technology.</p>
<h2>Tailored advice</h2>
<p>AI can <a href="https://www.ftadviser.com/your-industry/2023/07/17/can-generative-ai-truly-replace-a-financial-adviser/">tailor</a> investment advice to a person’s preferences. For example, for investors who want to prioritise ethical investing in environmental, social and governance stocks, AI can tailor a strategy without the need to pay for a financial adviser. </p>
<p>AI can <a href="https://www.sciencedirect.com/science/article/pii/S0275531923000077">analyse</a> news and social media to understand market trends, offering insights into potential market movements. Portfolios built by robo-advisers may also be <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/poms.14029">more resilient during market downturns</a>, effectively managing risk and protecting investments.</p>
<p>Robo-advisers can offer certain <a href="https://www.ft.com/content/6694bb4a-a585-496a-b7f3-d1841984f9b3">features</a> like reduced investment account minimums and lower fees, which make services more accessible than in the past. Other features such as <a href="https://corporatefinanceinstitute.com/resources/wealth-management/robo-advisors/">tax-loss harvesting</a>, a strategy of selling assets at a loss to reduce taxes, and <a href="https://corporatefinanceinstitute.com/resources/wealth-management/robo-advisors/">periodic rebalancing</a>, which involves adjusting the proportions of different types of investments, make professional investment advice accessible to a wider audience.</p>
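Periodic rebalancing, at least, is simple to illustrate. This minimal sketch uses invented portfolio numbers; real robo-advisers also account for fees, taxes, drift thresholds and trade minimums:

```python
# Sketch of periodic rebalancing, one feature robo-advisers automate.
# Target allocation: 60% equities / 40% bonds. A market rally has
# drifted the actual weights away from the target.
target = {"equities": 0.60, "bonds": 0.40}
holdings = {"equities": 7200.0, "bonds": 2800.0}  # current values after drift

total = sum(holdings.values())

# Trade needed per asset: positive means buy, negative means sell.
trades = {asset: target[asset] * total - value
          for asset, value in holdings.items()}
print(trades)  # {'equities': -1200.0, 'bonds': 1200.0}
```

Selling 1,200 of equities and buying 1,200 of bonds restores the 60/40 split; the adviser simply repeats this check on a schedule or whenever drift exceeds a threshold.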
<p>These types of innovations are particularly beneficial for people in underserved communities or with limited financial resources. This has the <a href="https://www.brookings.edu/articles/robo-advice-an-effective-tool-to-reduce-inequalities/">potential</a> to improve financial literacy through empowering people to make better financial decisions. </p>
<h2>AI’s multifaceted role</h2>
<p>AI’s impact on investment fund management goes way beyond robo-advisers, however. Fund managers are using AI algorithms in a variety of ways. </p>
<p>In terms of data analysis, AI can sift through vast amounts of market data and historical trends to identify <a href="https://doi.org/10.1016/j.frl.2022.102941">ideal assets</a> and adjust portfolios in real time as markets fluctuate. AI is also used to <a href="https://www.sciencedirect.com/science/article/pii/S0378426621002466">improve risk management</a> by analysing complex data and making sophisticated decisions. </p>
<p>By using AI in this way, <a href="https://doi.org/10.1016/j.jedc.2022.104438">traders</a> can react and make faster decisions, which maximises efficiency. Other mundane tasks like <a href="https://ieeexplore.ieee.org/document/9315986">compliance monitoring</a> are increasingly automated by AI. This frees fund managers up to focus on more strategic decisions. </p>
<figure class="align-center ">
<img alt="A close up of a pair of hands holding a mobile phone with pound coins superimposed onto the foreground." src="https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=350&fit=crop&dpr=1 600w, https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=350&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=350&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=440&fit=crop&dpr=1 754w, https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=440&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/580727/original/file-20240308-24-xg6lqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=440&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">While AI is democratising investing, that comes with challenges.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/double-exposure-uk-stock-graphic-close-792232471">Loch Earn/Shutterstock</a></span>
</figcaption>
</figure>
<h2>What are the disadvantages?</h2>
<p>One of the biggest concerns regarding AI in this sector is that easy access to advanced investment tools may lead some people to overestimate their abilities and take too many financial risks. The sophisticated algorithms used by robo-advisers can be opaque, which makes it <a href="https://www.lseg.com/en/insights/data-analytics/how-might-ai-impact-investment-management">difficult</a> for some investors to fully understand the potential risks involved. </p>
<p>Another concern is how the evolution of robo-advisers has outpaced the implementation of <a href="https://fastercapital.com/content/Regulatory-Compliance-in-B2B-Robo-Advisors--Navigating-the-Legal-Landscape.html#Challenges-and-Opportunities">laws and regulations</a>. That could expose investors to financial risks and a lack of legal protection. This is an issue yet to be adequately addressed by financial authorities. </p>
<p>Looking ahead, the future of investment probably lies in a hybrid model. Combining the precision and efficiency of AI with the experience and oversight of human investors is vital.</p>
<p>Ensuring that information is accessible and transparent will be crucial for <a href="https://www.turing.ac.uk/sites/default/files/2021-06/ati_ai_in_financial_services_lores.pdf">fostering</a> a more informed and responsible investment landscape. By harnessing the power of AI responsibly, we can create a financial future that benefits everyone.</p><img src="https://counter.theconversation.com/content/224044/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Robo-advisers and AI are making investing accessible to everyone, but there are also risks to consider.Laurence Jones, Lecturer in Finance, Bangor UniversityHeather He, Lecturer in Data Science/Analytics, Bangor UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2217192024-02-16T02:30:56Z2024-02-16T02:30:56ZFrom Coke cans to shoes to menus: what’s behind the rise in personalised products?<p>Customised shoes, personalised drinks and specialised menu offerings. In a world where carbon copies of products are everywhere, retailers have to make their products stand out and provide customers with a unique purchasing experience.</p>
<p>The need to be different is even greater at a time when consumers are being careful about what they spend. Businesses have to work harder as they compete for the all-important dollar, so price wars between retailers are common.</p>
<p><a href="https://www.forbes.com/sites/bernardmarr/2023/09/25/the-10-biggest-business-trends-for-2024-everyone-must-be-ready-for-now/?sh=3d7117059ab0">Personalisation</a>, through bespoke products and personalised services, has been listed by international business magazine Forbes as one of the ten biggest business trends for 2024.</p>
<p>It’s clear – and has been for years – that personalisation appeals to consumers who want to feel cared for and understood by their favourite brands. In fact, consumers are willing to <a href="https://doi.org/10.1177/0022243720943191">pay more</a> for the experience.</p>
<h2>How businesses learn what consumers want</h2>
<p>Companies are increasingly using what marketers call <a href="https://www.bloomreach.com/en/blog/2023/a-marketers-guide-to-personalization-at-scale">personalisation at scale</a> by analysing large amounts of data about individuals to deliver products tailored to their specific needs, behaviours and preferences.</p>
<p>This historical and real-time data is gleaned from consumers’ online purchasing and browsing behaviour, use of mobile apps, internet searches, online shopping carts and brand loyalty cards. </p>
<p>E-commerce retailer Amazon personalises product recommendations based on consumers’ browsing and purchase history, offering them the same or variations of goods they have bought or at least looked at.</p>
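A heavily simplified sketch of this kind of recommendation logic is shown below, using an invented purchase history. Real systems draw on far richer signals than simple co-occurrence counts, but the "customers who bought X also bought Y" intuition is the same:

```python
# Toy "bought together" recommender over an invented purchase history.
purchases = {
    "ana":  {"kettle", "teapot", "mug"},
    "ben":  {"kettle", "mug"},
    "cara": {"teapot", "mug"},
    "dan":  {"kettle", "toaster"},
}

def recommend(item, k=2):
    """Rank other items by how often they share a basket with `item`."""
    scores = {}
    for basket in purchases.values():
        if item in basket:
            for other in basket - {item}:
                scores[other] = scores.get(other, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("kettle"))  # 'mug' ranks first: it appears with 'kettle' twice
```

Production recommenders scale this idea to millions of users and items, add browsing and timing signals, and learn weights with machine-learning models rather than raw counts.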
<p>Similarly, entertainment streaming platforms Netflix and Spotify analyse their users’ viewing and listening history to understand their preferences and recommend new content.</p>
<p>Coffee giant Starbucks communicates with its loyal members via games in their mobile app and rewards loyalists with specialised offers and exclusive product trials. The games are personalised to each customer based on the data gathered from their past visits and interactions with the app.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/personalised-learning-is-billed-as-the-future-of-schooling-what-is-it-and-could-it-work-194630">Personalised learning is billed as the 'future' of schooling: what is it and could it work?</a>
</strong>
</em>
</p>
<hr>
<p>Coke’s <a href="https://thebrandhopper.com/2023/06/09/branding-case-study-success-of-share-a-coke-campaign/">Share-a-Coke</a> campaign, unveiled in Australia in 2011, was a successful example of the bond brands can create with consumers just by adding a person’s name to the product.</p>
<p>The <a href="https://www.historyoasis.com/post/share-a-coke">company branded</a> its bottles and cans with the 150 most popular names in Australia and urged consumers to share a Coke with someone whose name adorned the label. The list of names later expanded.</p>
<p><a href="https://www.loreal.com/en/beauty-science-and-technology/beauty-tech/reinventing-the-beauty/">L’Oreal’s</a> most recent innovation is its in-store technology that digitally scans each customer’s skin. The data obtained is used to produce a customised foundation (from 72,000 possible combinations) to match an individual’s shade, level of hydration and coverage required.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Image of a lipsticks, foundation and other make up on a product stand in a department store" src="https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=451&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=451&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=451&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575238/original/file-20240213-18-wlrtzf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Cosmetics giant L'Oreal uses AI to produce customised make-up for its customers.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/bangkokthailand12-december-2019loreal-paris-brand-products-1588108621">chanonnat srisura/Shutterstock</a></span>
</figcaption>
</figure>
<p><a href="https://www.mytotalretail.com/article/3-retail-trends-transforming-the-industry-in-2024/">Nike</a> produces custom shoes in thousands of styles, colours and icon combinations as it continues to acquire data integration platforms that speed up the collection and analysis of consumer data.</p>
<h2>Consumers want more from their shopping experience</h2>
<p>In pre-digital times, personalisation was based on broad demographics and direct feedback from customers. It often took the form of personalised interactions between salespeople and VIP customers, or tailored store services. Personalisation was affordable only to high-net-worth individuals.</p>
<p>But the digital age has made personalisation accessible to all consumers, not just the high end. Today’s shoppers expect unique experiences and will vote with their dollar. This is backed by <a href="https://www.mckinsey.com/industries/retail/our-insights/personalizing-the-customer-experience-driving-differentiation-in-retail">research</a> showing personalised experiences drive company sales.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-are-closer-than-ever-to-being-able-to-3d-print-medicines-heres-why-thats-important-208026">We are closer than ever to being able to 3D print medicines. Here's why that's important</a>
</strong>
</em>
</p>
<hr>
<p>The <a href="https://www.forbes.com/sites/forrester/2023/05/19/three-consumer-behaviors-that-emerged-during-the-pandemic-are-persisting-despite-the-end-of-the-covid-19/?sh=fcaa9995d748">COVID-19 pandemic</a> only made personalisation more urgent for companies as consumers switched to new stores, products, or buying methods, proving brand loyalty was a thing of the past.</p>
<p>Consumers now expect more value from brands. They want to feel recognised and understood on an individual level, not part of the crowd. Personalisation at scale allows consumers to feel empowered by their choices. This feeling of <a href="https://doi.org/10.1509/jmkg.74.1.65">psychological ownership</a> results from designing your “own” product and can lead to greater value and brand <a href="https://www.sciencedirect.com/science/article/abs/pii/S1057740811000829">love</a>.</p>
<h2>Why personalisation works for the big brands</h2>
<p>Personalisation at scale offers companies <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/marketings-holy-grail-digital-personalization-at-scale">many advantages</a>. It can reduce customer acquisition costs and increase revenues. Personalising experiences for millions of customers also makes an offering difficult for competitors to imitate, especially when brands use proprietary technology.</p>
<p>Personalisation also means less waste as brands produce what consumers <em>actually</em> want rather than what they <em>think</em> consumers want. After all, consumers who find products unique to them are less likely to part with what they believe is their own creation.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="An iPhone showing the Starbucks app" src="https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575243/original/file-20240213-18-tbgpbk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Starbucks gathers information about its customers’ preferences through its app.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/search/starbucks-app">Robert Way/Shutterstock</a></span>
</figcaption>
</figure>
<p>However, using predictive algorithms to help brands analyse past behaviours (what you and others like you have bought or watched) and generate choices at scale can be imperfect.</p>
<p>Dating app Tinder’s reliance on algorithms to decide which photos users see has been criticised as <a href="https://www.wired.co.uk/article/dating-app-algorithms">flawed</a>, with very low rates of reciprocal interest between users who “swipe right”. Understanding human behaviour requires intuition alongside algorithms.</p>
<h2>If personalisation isn’t new, then why the sudden hype?</h2>
<p>Brands are rapidly embracing digital disruption. The digital revolution brought an influx of consumer data, but despite early algorithms, it was difficult for companies to make sense of large amounts of raw data.</p>
<p>Artificial Intelligence (AI) and machine learning have revolutionised this by enabling brands to use <a href="https://www.forbes.com/sites/adrianswinscoe/2023/12/18/15-customer-experience-predictions-for-2024/?sh=6256fbfb3e11">AI-driven methods</a> to understand their consumers and offer tailored content. In turn, consumers get to contribute to their product’s design.</p>
<p>Big brands like Nike and L’Oreal have the right formula for personalisation and their customers are enjoying a unique experience. This is good news for big brands with large budgets and access to data, but less so for smaller brands with fewer resources trying to compete for the customer’s attention.</p>
<p>With the growth of AI technology, we will start seeing open-source software with publicly accessible data that gives even the smallest brands the access and know-how to make every experience bespoke.</p>
<p class="fine-print"><em><span>Marian Makkar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Companies are using artificial intelligence to personalise products as consumers become more demanding about what they expect from big brands. Marian Makkar, Senior Lecturer in Marketing, RMIT University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>AI tools produce dazzling results – but do they really have ‘intelligence’?</h2>
<p class="fine-print"><em>Published 2024-02-13.</em></p>
<p>Sam Altman, chief executive of ChatGPT-maker OpenAI, is reportedly trying to find <a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0">up to US$7 trillion</a> of investment to manufacture the enormous volumes of computer chips he believes the world needs to run artificial intelligence (AI) systems. Altman also recently said <a href="https://www.reuters.com/technology/openai-ceo-altman-says-davos-future-ai-depends-energy-breakthrough-2024-01-16/">the world will need more energy</a> in the AI-saturated future he envisions – so much more that some kind of technological breakthrough like nuclear fusion may be required.</p>
<p>Altman clearly has big plans for his company’s technology, but is the future of AI really this rosy? As a long-time “artificial intelligence” researcher, I have my doubts.</p>
<p>Today’s AI systems – particularly generative AI tools such as ChatGPT – are not truly intelligent. What’s more, there is no evidence they can become so without fundamental changes to the way they work.</p>
<h2>What is AI?</h2>
<p>One definition of AI is a computer system that can “<a href="https://www.britannica.com/technology/artificial-intelligence">perform tasks commonly associated with intelligent beings</a>”. </p>
<p>This definition, like many others, is a little blurry: should we call spreadsheets AI, as they can carry out calculations that once would have been a high-level human task? How about factory robots, which have not only replaced humans but in many instances surpassed us in their ability to perform complex and delicate tasks?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732">Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know</a>
</strong>
</em>
</p>
<hr>
<p>While spreadsheets and robots can indeed do things that were once the domain of humans, they do so by following an algorithm – a process or set of rules for approaching a task and working through it.</p>
<p>One thing we can say is that there is no such thing as “an AI” in the sense of a system that can perform a range of intelligent actions in the way a human would. Rather, there are many different AI technologies that can do quite different things.</p>
<h2>Making decisions vs generating outputs</h2>
<p>Perhaps the most important distinction is between “discriminative AI” and “generative AI”. </p>
<p>Discriminative AI helps with making decisions, such as whether a bank should give a loan to a small business, or whether a doctor diagnoses a patient with disease X or disease Y. AI technologies of this kind have existed for decades, and bigger and better ones are <a href="https://www.fastcompany.com/90927119/why-discriminative-ai-will-continue-to-dominate-enterprise-ai-adoption-in-a-world-flooded-with-discussions-on-generative-ai">emerging all the time</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-is-everywhere-including-countless-applications-youve-likely-never-heard-of-222985">AI is everywhere – including countless applications you've likely never heard of</a>
</strong>
</em>
</p>
<hr>
<p>Generative AI systems, on the other hand – ChatGPT, Midjourney and their relatives – generate outputs in response to inputs: in other words, they make things up. In essence, they have been exposed to billions of data points (such as sentences) and use this to guess a likely response to a prompt. The response may often be “true”, depending on the source data, but there are no guarantees. </p>
<p>For generative AI, there is no difference between a “hallucination” – a false response invented by the system – and a response a human would judge as true. This appears to be an inherent defect of the technology, which uses a kind of neural network called a transformer. </p>
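The statistical “guessing” described above can be illustrated with a deliberately tiny sketch. This is not how a transformer works internally – real systems use neural networks trained on billions of examples – but it shows the core principle: pick a likely continuation from patterns in the training data, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in training text,
# then always emit the most frequent continuation. The output is "likely",
# not "true" -- the same limitation generative AI has at vastly larger scale.
training_text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("the"))   # "cat" -- the word seen most often after "the"
print(guess_next("sat"))   # "on"
```

Nothing in this model distinguishes a frequent continuation from a correct one, which is why a purely statistical generator can “hallucinate” with full confidence.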
<h2>AI, but not intelligent</h2>
<p>Another example shows how the goalposts of “AI” are constantly moving. In the 1980s, I worked on a computer system designed to provide expert medical advice on laboratory results. It was written up in the US research literature as <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1986.tb00192.x">one of the first four</a> medical “expert systems” in clinical use, and in 1986 an Australian government report described it as the most successful expert system developed in Australia. </p>
<p>I was pretty proud of this. It was an AI landmark, and it performed a task that normally required highly trained medical specialists. However, the system wasn’t intelligent at all. It was really just a kind of look-up table which matched lab test results to high-level diagnostic and patient management advice. </p>
<p>There is now technology which makes it very easy to build such systems, so there are thousands of them in use around the world. (This technology, based on research by myself and colleagues, is provided by an Australian company called Beamtree.)</p>
<p>In performing a task once done by highly trained specialists, they are certainly “AI”, but they are still not at all intelligent (although the more complex ones may have many thousands of rules for looking up answers).</p>
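A look-up system of this kind can be sketched in a few lines. This is a hypothetical miniature for illustration only – not the actual system described above or Beamtree’s technology, and the thresholds are invented:

```python
# Hypothetical miniature of a rule-based "expert system": an ordered list of
# rules, each mapping a condition on lab results to pre-written advice.
# Such systems can perform specialist-level tasks without any intelligence --
# they only look up answers their authors wrote in advance.
RULES = [
    (lambda r: r["tsh"] > 4.0 and r["ft4"] < 10.0,
     "Results consistent with primary hypothyroidism; suggest clinical review."),
    (lambda r: r["tsh"] < 0.4 and r["ft4"] > 25.0,
     "Results consistent with hyperthyroidism; suggest clinical review."),
    (lambda r: True,
     "Results within reference ranges; no comment."),
]

def advise(results):
    """Return the advice attached to the first rule whose condition matches."""
    for condition, advice in RULES:
        if condition(results):
            return advice

print(advise({"tsh": 6.2, "ft4": 8.1}))
```

Every answer the system can ever give was written by a human in advance; the program merely selects among them.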
<p>The transformer networks used in generative AI systems still run on sets of rules, though there may be millions or billions of them, and they cannot easily be explained in human terms. </p>
<h2>What is real intelligence?</h2>
<p>If algorithms can produce dazzling results of the kind we see from ChatGPT without being intelligent, what is real intelligence?</p>
<p>We might say intelligence is insight: the judgement that something is or is not a good idea. Think of Archimedes, leaping from his bath and shouting “Eureka” because he had had an insight into the principle of buoyancy.</p>
<p>Generative AI doesn’t have insight. ChatGPT can’t tell you if its answer to a question is better than Gemini’s. (Gemini, until recently known as Bard, is Google’s competitor to OpenAI’s GPT family of AI tools.)</p>
<p>Or to put it another way: generative AI might produce amazing pictures in the style of Monet, but if it were trained only on Renaissance art it would never invent Impressionism.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an Impressionist painting of water lilies on a pond." src="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=534&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=534&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=534&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=671&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=671&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575175/original/file-20240213-28-zm7k3i.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=671&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nympheas (Waterlilies)</span>
<span class="attribution"><a class="source" href="https://artsandculture.google.com/asset/0gEk3X6Bn40QKg">Claude Monet / Google Art Project</a></span>
</figcaption>
</figure>
<p>Generative AI is extraordinary, and people will no doubt find widespread and very valuable uses for it. Already, it provides extremely useful tools for transforming and presenting (but not discovering) information, and tools for turning specifications into code are already in routine use. </p>
<p>These will get better and better: Google’s just-released Gemini, for example, appears to try to <a href="https://fortune.com/2023/12/07/google-launches-deepmind-ai-gemini-chatgpt-openai-factuality-hallucination/">minimise the hallucination problem</a>, by using search and then re-expressing the search results. </p>
<p>Nevertheless, as we become more familiar with generative AI systems, we will see more clearly that they are not truly intelligent; there is no insight. It is not magic, but a very clever magician’s trick: an algorithm that is the product of extraordinary human ingenuity.</p>
<p class="fine-print"><em><span>Paul Compton was a founder of Pacific Knowledge Systems, later renamed Beamtree, but no longer has any involvement with the company.</span></em></p>
<p class="fine-print"><em>Existing AI systems learn patterns from very large piles of data – but they have no insight. Paul Compton, Emeritus professor in Computer Science and Engineering, UNSW Sydney. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>Two faces of dignity: a Kantian perspective on Uber drivers’ fight for decent working conditions</h2>
<p class="fine-print"><em>Published 2023-11-05.</em></p>
<p>On November 3, 2016, Emmanuel Macron, who had recently launched a presidential bid, mentioned what he felt was Uber’s positive role in providing work opportunities to low-income or unemployed youth (our translation and emphasis):</p>
<blockquote>
<p>“You go to Stains [a low-income town outside of Paris] to tell young people who are Uber drivers that it is better to loiter or deal […]. Our collective failure is that the neighbourhoods where Uber hires these young people are neighbourhoods where we haven’t managed to offer them anything else. Yes, they sometimes work 60 to 70 hours to get the minimum wage, but they return with dignity, they find a job, they put on a suit and a tie.”</p>
</blockquote>
<p>A year later, the perspective of many Uber drivers in Paris was quite different, as witnessed by a handout distributed by an activist group in November 2017:</p>
<blockquote>
<p>“You’ve been used by Uber, regain your dignity!” (“UberUsé, regagne ta dignité!”)</p>
</blockquote>
<h2>Dignity as work</h2>
<p>These two quotes refer to quite distinct concepts of dignity. On the one hand, French president Emmanuel Macron tells unemployed youth from low-income towns they ought to consider themselves lucky when Uber offers them the opportunity to don a suit and a tie and get behind the wheel. On the other, Uber drivers see themselves as being exploited by management and are ready to put up a fight to regain their dignity. So does Uber restore or take away workers’ dignity?</p>
<p>The French president’s notion of dignity is what some philosophers refer to as <em>social standing dignity</em>, the traditional conception (Sensen 2011). Rooted in an individual’s rank or office, it centres on the world of behavioural rules, rights and duties that surround these positions.</p>
<p>Hierarchical societies are structured through higher and lower social positions and with each one comes different ranks and different degrees of dignity. Thus, Macron contends that young people from poor areas are better off by taking on work from Uber, even if this means long hours and low wages. Here, employment is presented as the fundamental condition to social dignity.</p>
<h2>Migrant roots</h2>
<p>It is important to note that most people who take on an Uber job hail from a migrant background, sometimes stretching back several generations. In France, these are mainly from North and Sub-Saharan Africa. As the <a href="https://www.puf.com/content/UberUs%C3%A9s">sociological research from Sophie Bernard shows</a>, most were not unemployed before. Instead, they held unskilled, low-paying, painful and precarious jobs – quite a different situation to trafficking drugs or loitering. They became Uber drivers to improve their condition by gaining freedom and higher wages.</p>
<p>But they soon realised they were subjected to a new form of algorithmic management and forced to work more and more to earn less and less. This form of control is exercised remotely and indirectly by algorithms that enable the quasi-automatic supervision of many workers. Drivers are rated by customers for every journey they make. All it takes is one complaint from a customer for their account to be deactivated. Uber drivers are no longer subject to hierarchical control, but rather to customer demands. Nor are they totally free to organise their working hours as they see fit. To entice drivers to work for Uber, the company first offered them bonuses and high remuneration. Once the platform has enough drivers, <a href="https://www.puf.com/content/UberUs%C3%A9s">it removes the bonuses, lowers the fares and increases the commission</a>.</p>
<p>While they thought they were improving their conditions, they found themselves once again in another job as exploited migrants. As if Macron were telling them: “We have this opportunity for you to gain your social dignity with a job that other people in our society don’t want and don’t need, but it’s good enough for you.”</p>
<h2>Kant’s concept of equal moral worth</h2>
<p>The second notion of dignity is that of human dignity, the idea that was implemented into the 1948 Universal Declaration of Human Rights and into many constitutions after the Second World War. It is expressed in Kant’s idea of equal moral worth of all human beings. In his famous <a href="https://plato.stanford.edu/entries/persons-means/"><em>Formula of Humanity</em></a> of the Categorical Imperative, Kant states:</p>
<blockquote>
<p>“So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means.”</p>
</blockquote>
<p>Is that the notion Uber drivers can refer to? As we will see, it is, but it needs some clarification, and Kantian philosophy has its blind spots when it comes to dignity violations. What does it mean to use someone merely as a means? Kantians hold that you are used as a mere means if you cannot (reasonably) consent to how others treat you – especially if your will is manipulated by deception or overridden by coercion. One could then wonder what the problem is from a Kantian perspective, since Uber drivers took on the job willingly, as Macron emphasises.</p>
<p>And indeed, Kant did not think in categories like <em>exploitation</em>. We think that exploitation can also be understood in terms of instrumentalisation. The accusation Uber drivers formulate – “UberUsé” – refers directly to this: not to be used merely as a means to another’s purposes; not to be exploited, in the sense that platform capitalism puts you in a position where long working hours do not earn you the minimum wage, where you take all the risks for a platform that reaps all the benefits, where there is no reasonable alternative for you, and where Uber could reasonably pay you a decent wage, since its profits would allow it. Let’s remember that while Uber defines drivers as self-employed workers who provide the platform with labour and part of the production tools, it is the platform that sets the prices and takes a commission on each trip while passing on all the risks.</p>
<p>Moreover, there is another problem, and this one cannot be captured by the Kantian prohibition of instrumentalisation. It is the unequal social positions in a hierarchical and racist society that lead to inequality of opportunity. This goes against the Kantian requirement to treat others as ends in themselves: as persons with equal moral standing. Degrading migrants with the message “this job is good enough for you” contradicts that requirement. So what Uber drivers could see violated, on Kantian terms, is their human dignity – their equal moral standing – which would require providing them with equal opportunities in French society, and not just with opportunities that are “good enough for them” because “their” social standing is already at the bottom.</p>
<p>What is striking is how Uber drivers’ striving for social dignity can be abused when it comes to the exploitation of their labour. As they fight Uber’s working conditions, they are more faithful to Western, Kant-inspired values than Macron is. The president, by contrast, offers them a glimpse of social dignity in a kind of job that keeps them in an exploitative and precarious situation. One could say, in the spirit of Kant, that Uber drivers show self-esteem through a protest that aims at (re)gaining their dignity. Kant states in the Doctrine of Virtue: “Do not let others tread with impunity on your rights.”</p>
<p>As <a href="https://link.springer.com/article/10.1007/s10677-022-10288-7">Mieth and Williams argue</a>, there are wrongs beyond instrumentalisation when it comes to migration, wrongs that concern exclusion and inequality. Under the circumstances they find themselves in, Uber drivers put up a fight to express their human dignity, not their social dignity in Macron’s terms. But this human dignity implies social dignity in another sense: being acknowledged as an equal member of society, which implies equality of opportunity. So we think that Uber drivers’ fight to regain dignity is in line with Kant’s notion of human dignity. Their protest even gives the notion of equal human dignity concrete reality.</p>
<p class="fine-print"><em><span>Corinna Mieth has received funding from the Fondation Maison des sciences de l’homme (FMSH) and the Kant-Zentrum NRW.</span></em></p>
<p class="fine-print"><em><span>Sophie Bernard has received funding from the Institut Universitaire de France.</span></em></p>
<p class="fine-print"><em>With an eye to Kant’s work, a philosopher and a sociologist argue that the Uber project robs drivers of their dignity. Corinna Mieth, Legal and political philosopher, Fondation Maison des Sciences de l'Homme (FMSH); Sophie Bernard, Sociologist, university professor, Université Paris Dauphine – PSL. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>Social media content in times of war: an expert guide on how to keep violence off your feeds</h2>
<p class="fine-print"><em>Published 2023-11-02.</em></p>
<figure><img src="https://images.theconversation.com/files/556623/original/file-20231030-25-2np8f3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There are some practical ways to filter the amount of violent and graphic content you see on social media.</span> <span class="attribution"><span class="source">bubaone</span></span></figcaption></figure><p>Social media platforms are a great source of information and entertainment. They also help us to maintain contact with friends and family. But social media can also – <a href="https://theconversation.com/mounting-research-documents-the-harmful-effects-of-social-media-use-on-mental-health-including-body-image-and-development-of-eating-disorders-206170">and has</a>, <a href="https://doi.org/10.1093/joc/jqab034">often</a> – become a toxic environment for spreading disinformation, hatred and conflict. </p>
<p>Most people can’t or don’t want to opt out of social media. Efforts by courts and <a href="https://foreignpolicy.com/2022/04/25/the-real-threat-to-social-media-is-europe/">state bodies</a> to regulate or control it are slowly catching up, but so far have been unsuccessful. And social media companies have a record of <a href="https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/">prioritising engagement</a> over social benefit.</p>
<p>Users are left with a dilemma: how to benefit from social media without exposing themselves to distressing, damaging or illegal content. This becomes even more of an issue in times of heightened global tension and conflict. Both the conflict in Ukraine and now the Gaza War have increased the risk of seeing <a href="https://www.npr.org/2023/10/24/1208165068/graphic-videos-and-images-of-the-israel-hamas-war-are-flooding-social-media">horrifying and damaging images</a> on one’s feed. </p>
<p>This article, based on <a href="https://orcid.org/0000-0001-5171-663X">my research</a> on news on social media, is a guide to curating and editing your social media feeds to ensure that the content you see is suited to your needs and is not offensive or disturbing. </p>
<p>It is organised into the broadest social media categories. I’m not covering newer services such as <a href="https://www.threads.net/login">Threads</a>, <a href="https://mastodon.social/explore">Mastodon</a>, <a href="https://post.news/feed">Post</a> and <a href="https://bsky.app/">Bluesky</a>, although the principles are generally applicable. I have focused on using these apps on a mobile phone, because that’s what <a href="https://www.pewresearch.org/global/2022/12/06/internet-smartphone-and-social-media-use-in-advanced-economies-2022/">the majority of users</a> do, rather than using them on a web browser. I am concentrating mostly on video content.</p>
<p>Social media can be a powerful tool for information and learning, but it is a flawed one. Whatever approach you take to managing your feeds, remain cautious and sceptical. Pay attention to updates to policies and user agreements and consider carefully who you trust and follow. </p>
<h2>Your choice or theirs?</h2>
<p>Many social networks offer an algorithmically selected feed as your first point of contact. The specifics of the algorithms are not publicly known and the companies refine them constantly. The feed is largely based on your location and the topics and people you have expressed an interest in previously (whether following, or simply having watched or interacted with the content). It may also include other information such as your age and gender, which you may have previously given the service. </p>
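The general shape of such interest-based ranking can be illustrated with a toy scoring function. This is purely hypothetical: the real algorithms are not public and rely on undisclosed, machine-learned models with many more signals.

```python
# Purely illustrative feed ranking: score each post against what the service
# has inferred about a user, then show the highest-scoring posts first.
# The signal names and weights here are invented for illustration.
def score(post, user):
    s = 0.0
    if post["topic"] in user["inferred_interests"]:
        s += 2.0                     # topics you have engaged with before
    if post["author"] in user["follows"]:
        s += 1.5                     # accounts you chose to follow
    s += post["engagement_rate"]     # content that keeps people watching
    if post.get("paid_promotion"):
        s += 1.0                     # advertisers pay for placement
    return s

user = {"inferred_interests": {"football"}, "follows": {"@friend"}}
posts = [
    {"topic": "football", "author": "@stranger", "engagement_rate": 0.9},
    {"topic": "knitting", "author": "@friend", "engagement_rate": 0.2},
]
feed = sorted(posts, key=lambda p: score(p, user), reverse=True)
print(feed[0]["topic"])   # the football post ranks first
```

Note that in this sketch inferred interest outweighs an explicit follow – which is exactly why an algorithmic “for you” feed can surface strangers’ content ahead of posts from people you actually chose to follow.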
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-are-moulding-and-shaping-our-politics-heres-how-to-avoid-being-gamed-201402">Algorithms are moulding and shaping our politics. Here's how to avoid being gamed</a>
</strong>
</em>
</p>
<hr>
<p>Organisations and individuals invest money and time in ensuring that their content will be seen. Advertisers will also pay to have their content shown to customers who meet their criteria. It is also important to remember that paid content is not just goods and services for sale; it may also push a political or social agenda – often a hidden one. This is the basis of <a href="https://link.springer.com/article/10.1007/s13278-023-01028-5">fake news and deliberate misinformation</a>.</p>
<p>Here are a few ways to manage your social media feeds.</p>
<h2>Be careful who you follow</h2>
<p>On all networks except TikTok, the key is carefully selecting the people you follow.</p>
<p>On Twitter (X) the best option is to move away from the “for you” page (which is the default view) and focus on the “following” page. You can’t remove the “for you” page entirely. The “following” feed includes everyone you follow, their tweets and their retweets. </p>
<p>If you are seeing content you don’t want to, you can unfollow, block or mute them.</p>
<p>The simplest way to clean up your Facebook news feed is to “unfriend” accounts. Another option is to “unfollow” someone: you remain friends, they can see your content and engage with it, but their posts won’t appear in your feed unless they mention you or you seek it out. Or you can “take a break” from someone, which is a kind of temporary block. Blocking is the most extreme option. It will remove them and all of their content and hide all of yours from them.</p>
<p>Instagram offers similar options to unfollow and mute (similar to Facebook’s “take a break” option).</p>
<p>TikTok has only limited options for users to filter or curate their feeds. The “following” page only shows creators you are following (and ads). It isn’t and can’t be set as the default view.</p>
<p>The “for you” page is entirely algorithm driven. Clicking on a creator only allows you to follow them, not to hide or block them. You can, however, block specific users. Click on their profile, then the share icon. “Report” and “block” are below the various share options. Blocking removes their content, but not other users’ content that features them.</p>
<h2>Explore your settings</h2>
<p>Many platforms have options for limiting violent or graphic content. On Facebook this is buried in the Settings menu. From there, click on News Feed, then Reduce. You can’t remove this content, but you can move it down in your feed. </p>
<p>On TikTok, long pressing on the screen brings up the options panel. From there you can report a video; there’s also a “not interested” option to remove that video and others with similar hashtags from your feed. If you click on “details” to see which hashtags will be filtered, you can select specific ones to block. It’s not clear how reliable this is, however – hashtags change over time. A number of hashtags apparently can’t be filtered, but it’s not clear what these are or why they can’t be filtered.</p>
<p>The “content preferences” option under “settings” allows you to filter video keywords. That removes them from your “for you” page, your “following” page, or both.</p>
<p>You can also set TikTok to “restricted mode”. This limits access to “unsuitable content” – an opaque description.</p>
<h2>User beware</h2>
<p>This is not a perfect guide, since social media is not designed to be controlled by the user. These companies are based on user engagement: the more time you spend on their app, the more money they make. They’re not particularly interested in ensuring the content is helpful or accurate.</p>
<p class="fine-print"><em><span>Megan Knight does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Whatever approach you take to managing your feeds, remain cautious and sceptical.Megan Knight, Associate Dean, University of HertfordshireLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2163212023-10-27T14:27:50Z2023-10-27T14:27:50ZHow to redesign social media algorithms to bridge divides<p>Social media platforms have been implicated in conflicts of all scales, from <a href="https://www.theatlantic.com/magazine/archive/2023/09/jarell-jackson-shahjahan-mccaskill-killed-philadelphia-social-media/674760/">urban gun violence</a> to the <a href="https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media/">storming of the US Capitol building</a> on January 6 and <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/N16/350/68/PDF/N1635068.pdf?OpenElement">civil war in South Sudan</a>. Scientifically, it is <a href="https://theconversation.com/misinformation-why-it-may-not-necessarily-lead-to-bad-behaviour-199123">difficult to tell</a> how much social media can be blamed for one-off incidents. </p>
<p>But in much the way that climate change increases the risk of extreme weather, evidence suggests that current algorithms (which mostly <a href="https://medium.com/understanding-recommenders/how-platform-recommenders-work-15e260d9a15a">optimise for engagement</a>) raise the political “temperature” by disproportionately surfacing inflammatory content. This <a href="https://arxiv.org/abs/2305.16941">may make people angrier</a>, increasing the risk that social differences <a href="https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media">escalate to violence</a>.</p>
<p>But what if we redesigned social media to bridge divides? “<a href="https://www.belfercenter.org/publication/bridging-based-ranking">Bridging-based ranking</a>” is an alternative kind of algorithm for ranking content in social media feeds that explicitly aims to build mutual understanding and trust across differing perspectives.</p>
<p>The core logic of bridging-based ranking has already been used on <a href="https://bridging.systems/facebook-papers/">Facebook</a> and <a href="https://communitynotes.twitter.com/guide/en/about/introduction">X</a> (formerly known as Twitter), albeit not in the main feed. It is also used in <a href="https://pol.is/home">Polis</a>, an online platform for collecting public input, used by several governments to inform policymaking on polarised topics. </p>
<p>There are many open questions, but evidence from existing uses of bridging-based ranking suggests that changes to algorithms may <a href="https://arxiv.org/abs/2307.13912">reduce partisan animosity</a> and <a href="https://bridging.systems/facebook-papers/">improve the quality and inclusiveness</a> of online interactions.</p>
<p>People are increasingly looking for alternative algorithms. Regulators <a href="https://techcrunch.com/2023/08/25/quiet-qutting-ai/">in the EU</a> and new platforms <a href="https://blueskyweb.xyz/blog/3-30-2023-algorithmic-choice">such as Bluesky</a> are giving users choice regarding which algorithm determines what they see, and recent <a href="https://www.science.org/content/article/does-social-media-polarize-voters-unprecedented-experiments-facebook-users-reveal">large-scale experiments on Facebook</a> have tested different options.</p>
<p>If we care about social cohesion, then during this period of “shopping around” we need to seriously consider alternatives such as bridging.</p>
<h2>How it works</h2>
<p>Current <a href="https://medium.com/understanding-recommenders/how-platform-recommenders-work-15e260d9a15a">engagement-based algorithms</a> make predictions about which posts are most likely to generate clicks, likes, shares or views – and use these predictions to rank the most engaging content at the top of your feed. This tends to amplify the most polarising voices, because divisive perspectives are very engaging.</p>
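<p>As a toy illustration only (real platforms use large machine-learned models, and the post names and weights below are invented for the example), engagement-based ranking can be thought of as sorting posts by a predicted-engagement score:</p>

```python
# Toy model of engagement-based ranking (illustrative only; not any
# platform's actual code -- real systems learn these weights from data).
def predicted_engagement(post):
    # Weighted guess at the clicks, likes and shares a post will earn.
    return (1.0 * post["predicted_clicks"]
            + 2.0 * post["predicted_likes"]
            + 3.0 * post["predicted_shares"])

def rank_feed(posts):
    # The most "engaging" content floats to the top of the feed.
    return sorted(posts, key=predicted_engagement, reverse=True)

# Hypothetical posts: the divisive one is predicted to be more engaging.
posts = [
    {"id": "calm_explainer", "predicted_clicks": 5, "predicted_likes": 4, "predicted_shares": 1},
    {"id": "outrage_bait", "predicted_clicks": 9, "predicted_likes": 3, "predicted_shares": 6},
]
print([p["id"] for p in rank_feed(posts)])  # the divisive post ranks first
```

<p>Because divisive content tends to score highest on exactly these signals, a feed sorted this way amplifies it by construction.</p>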
<p><a href="https://bridging.systems/">Bridging-based ranking</a> uses a different set of signals to determine which content gets ranked highly. One approach is to increase the rank of content that receives positive feedback from people who normally disagree. This creates an incentive for content producers to be mindful of how their content will land with “the other side”.</p>
<p>Among the <a href="https://bridging.systems/facebook-papers/">internal Facebook documents</a> leaked by whistleblower Frances Haugen in 2021, there is evidence that Facebook tested this approach for ranking comments. </p>
<p>Comments with positive engagement from diverse audiences were found to be of higher quality, and “much less likely” to be reported for bullying, hate or inciting violence. A similar strategy is used in <a href="https://communitynotes.twitter.com/guide/en/about/introduction">Community Notes</a>, a crowd-sourced fact checking feature on X, to identify notes that are helpful to people on both sides of politics.</p>
<p>This pattern of “diverse positive feedback” is the most widely implemented approach to bridging. Others include <a href="https://arxiv.org/abs/2307.13912">lowering the ranking</a> of content that promotes partisan violence, or using surveys to shape algorithms so that they increase the ranking of content according to <a href="https://www.wired.com/story/platforms-engagement-research-meta/">how it makes users feel in the long term</a>, rather than the short term.</p>
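<p>A minimal sketch of the “diverse positive feedback” idea, with invented group labels and reaction counts (real systems such as Community Notes use matrix-factorisation models rather than this simple rule):</p>

```python
# Toy bridging score: content ranks highly only if it receives positive
# feedback from *both* sides of a divide (illustrative assumption: the
# audience is pre-sorted into two groups, "group_a" and "group_b").
def bridging_score(likes_by_group):
    # Using min() means approval from one side alone is not enough.
    return min(likes_by_group.get("group_a", 0), likes_by_group.get("group_b", 0))

# Hypothetical posts and their positive reactions per group.
posts = {
    "partisan_zinger": {"group_a": 120, "group_b": 2},
    "common_ground": {"group_a": 45, "group_b": 40},
}
ranked = sorted(posts, key=lambda p: bridging_score(posts[p]), reverse=True)
print(ranked)  # the cross-group post outranks the one-sided hit
```

<p>Under this rule the one-sided post, despite far more total likes, scores lower than the post that both groups approve of, which is the incentive shift bridging aims for.</p>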
<p>Conflict is an important part of society, and in many cases, a key driver of <a href="https://www.jstor.org/stable/586859">political and social change</a>. The goal of bridging is not to eliminate conflict or disagreement, but to promote constructive forms of conflict.</p>
<p>This is known as <a href="https://www.beyondintractability.org/essay/transformation">conflict transformation</a>. Professional mediators, facilitators and “peacebuilders”, who work with opposing groups, have a detailed understanding of <a href="https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media">how conflicts escalate</a>. They also know how to structure communication between opposing groups in ways that build mutual understanding and trust.</p>
<p>Research on bridging-based ranking can draw on this, taking insights from conflict management in the physical world and <a href="https://scripties.uba.uva.nl/search?id=record_24357">translating</a> them <a href="https://howtobuildup.medium.com/archetypes-of-polarization-on-social-media-d56d4374fb25">into digital systems</a>. </p>
<p>For example, facilitating contact between people from rival groups in “opt in”, non-threatening settings <a href="https://doi.org/10.1016/j.ijintrel.2011.03.001">can reduce prejudice</a>, and we <a href="https://doi.org/10.1073/pnas.2311627120">can</a> <a href="https://www.nature.com/articles/s41562-023-01655-0">design</a> social platforms to create these conditions online.</p>
<h2>Why should big tech adopt this?</h2>
<p>Firms such as Meta have built their fortunes on the “attention economy” and on content that promotes short-term engagement, and hence revenue.</p>
<p>We simply don’t yet know the extent to which the goals of bridging and engagement are in tension. If you talk to people who work at social media platforms, they will tell you that when well-intended changes to the algorithm are tested, user engagement sometimes drops initially, but then slowly rebounds over time, ultimately ending up with more engagement.</p>
<p>The problem is, platforms normally get cold feet and cancel experiments before they can observe such long-term benefits. Evidence we <em>do</em> have from <a href="https://bridging.systems/facebook-papers/">leaked Facebook papers</a> suggests that incorporating bridging <a href="https://youtu.be/ePh_DVi3dMM">improves the user experience</a>.</p>
<p>Bridging-based ranking might also have benefits beyond engagement. By reducing <a href="https://lukethorburn.com/files/BridgingBasedRanking-PluralitySpringSymposium.pdf#page=13">toxicity</a> and content that <a href="https://bridging.systems/facebook-papers/">violates community guidelines</a>, it would likely reduce the need for costly content moderation.</p>
<p>Demonstrating a willingness to make their algorithms less divisive would also build goodwill among regulators, reducing the risk of reputational and legal damage. For example, Facebook has been heavily criticised for allegedly facilitating incitements to violence in <a href="https://www.bbc.co.uk/news/world-asia-46105934">Myanmar</a>, <a href="https://www.theguardian.com/world/2018/mar/07/sri-lanka-blocks-social-media-as-deadly-violence-continues-buddhist-temple-anti-muslim-riots-kandy">Sri Lanka</a>, and <a href="https://www.theguardian.com/technology/2022/dec/14/meta-faces-lawsuit-over-facebook-posts-inciting-violence-in-tigray-war">Ethiopia</a>. </p>
<p>It has subsequently faced lawsuits from victims and communities, who have sought <a href="https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence">up to £150 billion</a> in damages.</p>
<h2>Questions and challenges</h2>
<p>Important questions around bridging-based ranking remain, and we set out many of these in a <a href="https://knightcolumbia.org/content/bridging-systems">recent paper</a> published with the Knight First Amendment Institute, which publishes original scholarship and policy papers relating to the defence of freedoms of speech and the press in the digital age. </p>
<p>Which divides should be bridged? Are there unintended consequences – for example, amplifying mainstream views at the expense of minority viewpoints? How can decisions about the design of mass communication technologies be made democratically?</p>
<p>Bridging is not a panacea. There is only so much algorithmic changes can do to address societal conflict, which is a result of complex factors such as inequality. But by recognising that digital platforms are reshaping society, we have an obligation to guide that process in an ethical, humanistic direction that brings out the best in us.</p>
<p>It falls to both the tech companies that built these systems and an engaged public to create technologies designed for social cohesion. With care, wisdom and democratic oversight, we can foster online communities that reflect our better sides. But we have to make that choice.</p>
<p class="fine-print"><em><span>Aviv Ovadya is affiliated with the the Berkman Klein Center at Harvard, the AI & Democracy Foundation, the newDemocracy Foundation, and the Centre for Governance of AI. </span></em></p><p class="fine-print"><em><span>Luke Thorburn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Algorithms have been blamed for dividing society. What if they could support social cohesion instead?Luke Thorburn, PhD Candidate in Safe and Trusted AI, King's College LondonAviv Ovadya, Affiliate at the Berkman Klein Center for Internet & Society, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2145252023-10-10T17:00:39Z2023-10-10T17:00:39ZAI: we may not need a new human right to protect us from decisions by algorithms – the laws already exist<figure><img src="https://images.theconversation.com/files/552765/original/file-20231009-17-qwgeso.jpg?ixlib=rb-1.1.0&rect=32%2C8%2C5431%2C3628&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Hiring algorithms could filter candidates before interviews even take place.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/back-view-female-candidate-apply-position-1191901783">fizkes / Shutterstock</a></span></figcaption></figure><p>There are risks and harms that come with relying on algorithms to make decisions. People are already feeling the impact of doing so. 
Whether <a href="https://www.science.org/doi/10.1126/science.aax2342">reinforcing racial biases</a> or <a href="https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election">spreading misinformation</a>, many technologies that are labelled as artificial intelligence (AI) help amplify age-old malfunctions of the human condition.</p>
<p>In light of such problems, calls have been made to <a href="https://academic.oup.com/ejil/article-abstract/32/4/1249/6448877">create a new human right</a> against being subject to automated decision-making (ADM), which the UK <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/">Information Commissioner’s Office (ICO) describes</a> as “the process of making a decision by automated means without any human involvement”. </p>
<p>Such systems rely on being exposed to data, whether factual, inferred, or created via profiling. But if effective regulation of ADM is the goal, creating new laws is probably not the way to go. </p>
<p><a href="https://academic.oup.com/ijlit/article/31/2/114/7227602">Our research</a> suggests we should consider a different approach. Legal frameworks for data protection, non-discrimination, and human rights already offer protection to people from the negative impacts of ADM. Rules from these bodies of law can also guide regulation more generally. We could therefore focus on ensuring that the laws we already have are properly implemented.</p>
<h2>Current harms and future risks</h2>
<p>Automated decision making is being used in various ways – and there are more applications on the way. Areas subject to automation include the processing of asylum and welfare support applications and the deployment of lethal military technology. But even where ADM is considered to bring benefits, it can also have negative effects.</p>
<p>The criminalisation of children is one possible risk of using certain ADM systems, where “<a href="https://academic.oup.com/hrlr/article-abstract/22/1/ngab028/6438104">predictive risk models</a>” used in child protection services can result in vulnerable children being further discriminated against. ADM can also make securing work harder – a hiring algorithm <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1468-2230.12759">developed by Amazon</a> “scored female applicants more poorly than their equivalently qualified male counterparts.” </p>
<p>In several countries, including the UK, courts also rely on ADM. For example, it’s used to make sentencing recommendations, calculate the <a href="https://theconversation.com/a-black-box-ai-system-has-been-influencing-criminal-justice-decisions-for-over-two-decades-its-time-to-open-it-up-200594">probability of a person reoffending</a>, and assess the flight risk of defendants, which determines whether they will be released on bail pending trial. </p>
<p>These applications can result in unfair processes and unjust outcomes for many reasons. This could happen because a judge unwittingly accepts erroneous results produced by ADM, or because no one is able to understand how or why a particular system arrived at its conclusion. </p>
<p>Historically, human prejudices <a href="https://www.science.org/doi/10.1126/science.aax2342">have also been embedded</a> in the design of such software. This is because the algorithms are trained on real world data, often from the internet. Exposing the system to this information may help improve their performance at a task from one perspective, but the data also reflects people’s biases. This means that members of marginalised groups can end up being punished, in the way we saw earlier when women were disadvantaged by a hiring algorithm. </p>
<h2>Protection and regulation</h2>
<p>The urge to adopt new legal rules is perhaps understandable considering the stakes and the potential harm ADM could and does do. However, as regards creating a new human right, negotiating new laws takes time, money and resources. And once any new law comes into force it can take decades to be accurately understood for the purposes of practice.</p>
<p>Given that many relevant laws already exist, it’s unclear whether a new human right would significantly influence how systems for automated decision making are designed and deployed.</p>
<p>Yet without tangible implementation and enforcement, the content of these existing laws can become hollow. Effective governance of ADM by these laws requires <a href="https://oecd.ai/en/catalogue/tools/algorithmic-impact-assessment-tool">impact assessments of automated decisions</a>, human supervision of ADM systems, and complaints processes. These should all be mandated. A thorough impact assessment will be able to identify, for example, unintended harms to individuals and groups, and help shape appropriate mitigation measures. </p>
<p>Yet these information gathering measures need to be <a href="https://judicature.duke.edu/wp-content/uploads/2021/04/Sales_Spring2021.pdf">accompanied by sufficient oversight</a> by a competent, resourced, and – possibly – public body. This would help uphold democratic accountability. Such bodies would also be tasked with ensuring that people negatively affected by ADM could file complaints that are adequately dealt with. These steps would make current laws on data protection, non-discrimination, and human rights more meaningful and effective in protecting individuals and groups from the harms of automated decisions.</p>
<p>The law across many areas is often criticised – sometimes rightly – for struggling to adapt to change. But a merit of the law in general is its ability to provide recourse to people who have experienced wrongdoing. It provides principled teeth to take a bite out of unprincipled conduct. </p>
<p>This capacity is significant for another reason. Corporate spin regarding digital technologies matches how they are often portrayed in public. Commentary, too, frequently tends towards “<a href="https://www.tandfonline.com/doi/full/10.1080/13642987.2023.2227100">hyperbole, alarmism, or exaggeration</a>”. This hype complements practices such as ethics-washing that provide a means of feigning commitment to regulation, while ignoring the very laws capable of providing it.</p>
<p>Chatter about the likes of <a href="https://www.unesco.org/en/artificial-intelligence/recommendation-ethics">“AI ethics”</a> greases the wheels of these strategies, sometimes turning nuanced and significant philosophical insights into box-ticking exercises. Ethics are an essential component of guiding the design, development, and deployment of automated decision making. However, the language of “ethics” can also be used by spin doctors to <a href="https://www.technologyreview.com/2019/12/27/57/ai-ethics-washing-time-to-act/">distract us</a>.</p>
<p>If anything here is worth remembering, it’s that ADM is not only a future problem, it’s a present problem. The laws that exist now can be used to address pressing issues stemming from this technology. </p>
<p>Whether this happens depends on public and private bodies improving the procedural machinery needed to enforce and oversee legal rules. These rules, many of which have been around for a while, just need a bit more life breathed into them to function effectively.</p>
<p class="fine-print"><em><span>Richard Mackenzie-Gray Scott receives funding from the British Academy, and is Visiting Professor at the Center for Technology and Society, Getulio Vargas Foundation.</span></em></p><p class="fine-print"><em><span>Elena Abrusci does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Effective implementation of existing law can protect us from the risks posed by AI algorithms.Elena Abrusci, Senior lecturer in Law, Brunel University LondonRichard Mackenzie-Gray Scott, Postdoctoral Fellow, Bonavero Institute of Human Rights, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2110932023-09-26T22:51:51Z2023-09-26T22:51:51ZFamily vlogs can entertain, empower and exploit<figure><img src="https://images.theconversation.com/files/548388/original/file-20230914-27-rfrjml.jpg?ixlib=rb-1.1.0&rect=0%2C23%2C5329%2C3523&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Family vlogs can be a double-edged sword that provide families with income, but also lead to exploitation.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/family-vlogs-can-entertain-empower-and-exploit" width="100%" height="400"></iframe>
<p>YouTube channels belonging to American content creator Ruby Franke were recently <a href="https://globalnews.ca/news/9960389/ruby-franke-youtube-kevin-jodi-hildebrandt/">scrubbed from the site</a> after the YouTuber was charged with child abuse. Franke was known for making parenting videos on her YouTube channel, 8 Passengers. Her videos frequently featured content on the family and her six children.</p>
<p>Police in Utah said the charges were laid after Franke’s 12-year-old son <a href="https://www.sltrib.com/news/politics/2023/09/05/heres-what-we-know-about-arrest/">climbed out of the window</a> of a home and went to a neighbour to ask for food and water. Police said the boy and his younger sister were found emaciated and required hospitalization. </p>
<p>As blogs and live journals gather internet dust, <a href="https://www.wix.com/blog/photography/how-to-vlog">vlogging</a> has emerged as a new source of intimate entertainment and, for creators, potential income. However, vlogs also raise serious questions about exploitation and the privacy rights of children.</p>
<h2>What is vlogging?</h2>
<p>Vlogs are videos, usually published through social media, that share the creator’s personal thoughts and experiences. Family vlogs like Franke’s are a popular form of this medium, where parents take viewers into their homes. The content might involve taking viewers along on the family’s daily routine. Family vlogging channels upload videos sharing <a href="https://www.youtube.com/watch?v=cq1hI0Mmyic">significant milestones</a>, <a href="https://www.youtube.com/watch?v=OxUHjIFkeIk&t=401s">morning routines</a> and <a href="https://www.youtube.com/watch?v=KkpvqOUrWec">preparing for school</a>. </p>
<p>Many might feel uneasy about <a href="https://theconversation.com/want-to-be-a-social-media-influencer-you-might-want-to-think-again-203306">content creation</a> that showcases private family life. However, at the same time, vlogs might offer families agency and alternative means of making ends meet at a time of stagnant wages and soaring living costs.</p>
<p>Thinking about vlogging as a kind of social reproduction allows us to think through the double-edged sword of content creation. Social reproduction refers to the labour of <a href="https://doi.org/10.1111/1467-8330.00207">lifemaking</a>: the day-to-day work of care, education and sustenance. <a href="https://doi.org/10.1177/0309132518791730">Feminist theorists</a> use this term to think about the ways in which caring labour supports and shapes our social, political and economic world.</p>
<p>Social reproduction is “<a href="https://doi.org/10.1111/1467-8330.00207">the fleshy, messy and indeterminate stuff of everyday life</a>.” It involves the responsibilities and relationships involved in maintaining daily life.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man and two young children sit in front of cameras and a laptop." src="https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/544800/original/file-20230825-21-qhucf7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many might feel uneasy about content that showcases private family life. However, vlogs offer alternative means of making ends meet.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>A response to the pressures of parenting</h2>
<p>Family vlogging did not develop in a vacuum. Instead, the trend towards “mumpreneurs” emerged from within a <a href="https://newleftreview.org/issues/ii100/articles/nancy-fraser-contradictions-of-capital-and-care">care crisis</a>. The cost of living is rising, wages are stagnating, and government benefits do not provide the support families need. Parents — and mothers in particular — are facing significant pressures when it comes to caring for children and the household.</p>
<p>Gender equity in the workforce has risen; however, there is still <a href="https://theconversation.com/we-can-we-reduce-gender-inequality-in-housework-heres-how-58130">huge inequity</a> when it comes to work in the home. Women are working unprecedented (paid and unpaid) hours, and are often told they are <a href="https://www.sfu.ca/vancity-office-community-engagement/below-the-radar-podcast/series/women-work-more/143-amanda-watson.html">failing at both</a>.</p>
<p>As a response to these pressures, mothers developed their own online communities to express the <a href="https://jarm.journals.yorku.ca/index.php/jarm/article/view/40238">highs and lows of parenting</a>. These communities began as <a href="https://doi.org/10.1080/1369118X.2016.1187642">“mommy blogs,”</a> but have increasingly moved to vlog format over the years. </p>
<p>Family vlogs can offer intimate counter-narratives to the expectations of parenthood. Mothers can share <a href="https://doi.org/10.1177/17504813221123663">the anxieties and pressures they face</a> and offer support to one another.</p>
<h2>Commodifying families</h2>
<p>However, there can be downsides to the trend. Many family vlogs are highly curated productions that can perpetuate ideas about what constitutes “good” motherhood, rather than challenge racialized, gendered and classist <a href="https://doi.org/10.1177/2056305117707186">ideals of motherhood</a>. In this way, vlogs are less about connection and more about commodification.</p>
<p>The implications of this monetization are complex. Performing <a href="https://doi.org/10.1093/ccc/tcy008">socially desirable</a> forms of motherhood can reproduce racial, sexual and class-based exclusion around who does and who does not count as a good mother. Dominant ideas of “motherhood” are shaped by heterosexual family structures, and there is a <a href="https://www.penguinrandomhouse.com/books/37354/women-race-and-class-by-angela-y-davis/">long history</a> of surveilling and <a href="https://utorontopress.com/9781442691520/exalted-subjects/">disciplining</a> racialized parents.</p>
<p>YouTube <a href="https://support.google.com/youtube/answer/72851">creators</a> depend on <a href="https://www.youtube.com/intl/en_ph/creators/how-things-work/video-monetization/">viewership and subscribers</a> to monetize their content. They also use YouTube advertisements, sponsorships and brand deals to generate income. While some creators can make millions of dollars, most do not. Many are precarious workers with fluctuating incomes determined by <a href="https://support.google.com/youtube/answer/141805#zippy=%2Chow-does-youtube-choose-what-videos-to-promote%2Chow-are-videos-ranked-on-home">YouTube’s algorithm</a>. </p>
<p>On the other hand, content creation allows mothers to rebel against economic insecurity by making their motherhood a source of income. While this offers a means of paying the bills, who benefits and who doesn’t when a certain version of the family is commodified? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man and a young girl preparing food in a kitchen while a smartphone films" src="https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/544801/original/file-20230825-15-k4cmur.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many content creators are dependent on social media algorithms that determine what content gets the most views.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Kids and clickbait: What is the law?</h2>
<p>Exploitation is twofold for family vloggers. Firstly, in the United States, parents are considered responsible for protecting their underage children’s private information and for providing consent on their behalf. Many influencers live in or move to the U.S. for <a href="https://www.cbc.ca/player/play/1987946563736">creator funds</a> and better networking opportunities. This becomes an issue when <a href="https://theconversation.com/why-arent-there-any-legal-protections-for-the-children-of-influencers-196463">parents exploit their children</a> while also being <a href="https://www.newsweek.com/youtube-lets-lawless-lucrative-sharenting-industry-put-kids-mercy-internet-1635112">in charge of providing consent</a>. </p>
<p>Secondly, <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45530.pdf">social media algorithms</a> determine whether a video becomes popular on a platform, and these algorithms <a href="https://www.youtube.com/intl/en_ca/creators/how-things-work/content-creation-strategy/">prioritize content that gains the most views</a>.</p>
<p>The algorithms can <a href="https://theconversation.com/want-to-be-a-social-media-influencer-you-might-want-to-think-again-203306">change without warning</a>, so creators never know if their content will remain popular. If family vloggers choose to stop showcasing their children on their channels, they might <a href="https://www.popsugar.com/family/posting-kids-faces-social-media-privacy-49045872">lose viewership</a> and priority within the algorithm.</p>
<p>Existing U.S. laws are unequipped to handle this new form of child labour. <a href="https://www.washingtonpost.com/history/2023/08/25/illinois-child-influencer-earnings-law-history-jackie-coogan/">The Coogan Act</a> attempts to protect the income of child performers, but it does not account for the unique conditions of child social media stars. </p>
<p>Most recently, <a href="https://www.nbcnews.com/news/child-influencers-law-illinois-reaction-rcna99831">Illinois became the first U.S. state</a> to pass a law ensuring that child influencers featured in monetized videos receive financial compensation. The law takes effect in July 2024, and there is hope that other states will follow suit. </p>
<p>This is a good start, but it is not enough. Policymakers should also look at the steps France has taken to protect child influencers. In 2020, the country passed a law that gives children the <a href="https://www.bbc.com/news/world-europe-54447491">right to be forgotten</a>. This means that child influencers can request that platforms remove content featuring them, without needing their parents’ permission.</p>
<p>Laws need to include more than financial compensation for child influencers. Regulations are needed to protect children’s privacy, guarantee their right to have content removed and prevent them from being overworked. There also needs to be a call for greater regulation and transparency of the social media algorithms that control and manipulate what is profitable.</p>
<p>Whether it is entertainment, exploitation or employment, family vlogging is a reminder of the complex interconnections between care work and wage work. As the households of strangers stream across our screens, parents and lawmakers must think carefully about the impacts on families and children.</p><img src="https://counter.theconversation.com/content/211093/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rebecca Hall receives funding from the Social Sciences and Humanities Research Council.</span></em></p><p class="fine-print"><em><span>Christina Pilgrim does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Vlogging has emerged as a new source of intimate entertainment and, for creators, potential income. However, it also raises serious questions about exploitation and the privacy rights of children.Rebecca Hall, Assistant Professor, Global Development Studies, Queen's University, OntarioChristina Pilgrim, Master's student, Department of Sociology, Queen's University, OntarioLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2112632023-09-06T21:21:02Z2023-09-06T21:21:02ZYour iPhone will soon be able to track your mental health with iOS 17, but what are the implications for your well-being?<figure><img src="https://images.theconversation.com/files/546529/original/file-20230905-19-uo066u.jpg?ixlib=rb-1.1.0&rect=157%2C44%2C4730%2C3263&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A new mood tracker will ask users to rate how they feel both daily and in random moments.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>When Apple’s <a href="https://www.apple.com/ca/newsroom/2023/06/apple-previews-new-features-coming-to-apple-services-this-fall/">latest software updates</a> drop this month, users will have access to mental <a href="https://www.apple.com/newsroom/2023/06/apple-provides-powerful-insights-into-new-areas-of-health/">health and wellness</a> features unlike anything currently available in a smartphone. With the Apple Watch and iOS health app, Apple has long striven to <a href="https://www.reuters.com/technology/apple-outlines-health-technology-strategy-new-report-2022-07-20/">cement itself in the health-care tech space</a>. But the new features go beyond the standard heart rate, sleep, calorie and fitness trackers that have become universal in smart tech. </p>
<p>A new mood tracker (dubbed “State of Mind”) will ask users to rate how they feel both in random moments (from unpleasant to pleasant) and daily. Mental health questionnaires will provide users with a preliminary screening for depression (using the <a href="https://doi.org/10.3928/0048-5713-20020901-06">PHQ-9 screening tool</a>) and anxiety (using the <a href="https://doi.org/10.1001/archinte.166.10.1092">GAD-7 screening tool</a>) that can alert them to their risk levels and connect them to licensed professionals in their area.</p>
<p>Finally, Apple is introducing a journaling app that can collect user data from photos, texts, music, gaming and TV history, location and fitness to give users a holistic picture of each day. </p>
<p>Those who use Apple’s <a href="https://research-methodology.net/apple-ecosystem-closed-effective/">ecosystem</a> know that it’s <a href="https://slate.com/technology/2021/06/apple-wwdc-ios15-new-features-walled-garden.html">extensive and powerful</a>, and true Apple devotees will use an Apple product for nearly every digital experience they have.</p>
<p>This means Apple is in a position to arrive at unique insights about a user’s life. What it is proposing in iOS 17 is essentially to hold a mirror up to its users, allowing them to see their lives through their interactions with technology. </p>
<h2>Tracking mental state</h2>
<p>As a philosopher of psychology who studies how technology is changing the way people relate to their mental health, and as an avid Apple fan, I wanted to try out these new features as soon as possible. I downloaded the public beta software in July and want to share my insights about how we might approach this new technology.</p>
<p>The State of Mind tool is simple to use. When opening the Health App after updating to iOS 17, I was prompted to start tracking my mental state. I can choose to log a state at a specific time (for example, how did I feel at 2:30 p.m. today?), or to log my mental state for the day. </p>
<p>The sliding scale of mental states is visually appealing. The screen turns blue when I slide to the “unpleasant” options and orange when I slide to the “pleasant” options. </p>
<p>After settling on a mental state, users are prompted to give some context. </p>
<p>First, there’s a predetermined list of emotions that might describe the user’s mental state (for example, “anxious,” “content,” “happy,” “excited”), and then a list of factors that might be contributing to that mental state (such as “work,” “friends,” “current events”). Here users can write in something specific that will be included in the log. </p>
<p>If they use it daily, users can access a calendar of daily mental states and a graph that visualizes the cycle of states over a given week, month or year. Clicking on any data point will pull up the details of that day, any momentary moods the user logged and the context the user provided. </p>
<p>The user interface functions similarly to the other health metrics Apple already logs. It is a minimalist design that offers easily digestible data. Users can access mental state metrics on the home screen of the app with their other health data. </p>
<figure class="align-left ">
<img alt="A screenshot of the State of Mind graph presented with the author's exercise data over the past month." src="https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1299&fit=crop&dpr=1 600w, https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1299&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1299&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1633&fit=crop&dpr=1 754w, https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1633&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/542365/original/file-20230811-25-7c96ld.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1633&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Mood data can be presented alongside exercise minutes, inviting users to draw conclusions about them.</span>
<span class="attribution"><span class="source">(Owen Chevalier)</span></span>
</figcaption>
</figure>
<p>When using the mental well-being features, I can’t help thinking that their introduction is a step closer to <a href="https://plato.stanford.edu/entries/enhancement">transhumanism</a>: the amalgamation of humans and technology, and the eventual replacement of the human body with technology. </p>
<p>Instead of just measuring physical fitness (tracking workouts, counting calories), the iPhone and Apple Watch can be holistic measures of me as a person. They can define not only my active life but also my mental life. I can scroll through an Apple-branded definition of who I am. Eventually, I can become the Apple ideal version of myself. </p>
<p>On the surface, it is helpful to see that I often rate days more highly when I’m active and sleep enough (although it doesn’t take AI to know that). However, as a researcher I know that there’s a limit to what data can tell us, based on the measurements we use and our <a href="https://plato.stanford.edu/entries/scientific-knowledge-social/#SciSoc">biases as interpreters</a>.</p>
<p>I wonder how the average Apple user will interpret this data, and whether they will start shaping their lives to arrive at graphs that look desirable. </p>
<p>The late philosopher Ian Hacking describes a <a href="https://www.thebritishacademy.ac.uk/documents/2043/pba151p285.pdf">looping effect</a> between people and the labels they’re given. Looping effects are prominent in the algorithm-driven software we use. Researchers have found people’s TikTok feeds become <a href="https://doi.org/10.5210/spir.v2020i0.11172">reflections of their self-concept</a> as they begin to trust the insights AI draws from the feedback they’ve given. </p>
<p>However, TikTok algorithms are not blank slates for self-concept creation. They’re designed to <a href="https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html">put people into marketing categories to sell them to advertisers</a>.</p>
<p>Apple isn’t trying to <a href="https://www.wired.com/story/apple-privacy-data-collection">sell your data</a>; its <a href="https://www.apple.com/legal/privacy/pdfs/apple-privacy-policy-en-ww.pdf">privacy policy</a> states, “Apple does not share personal data with third parties for their own marketing purposes.” But its health app reflects its corporate mandates and the world it wants to create. </p>
<p>In an <a href="https://time.com/5472329/apple-watch-ecg/">interview with <em>Time</em></a>, Apple CEO Tim Cook said, “Apple’s largest contribution to mankind will be in improving people’s health and well-being.” </p>
<p>Apple is a company of ideals. In contrast to traditional computer marketing, which highlights performance specs, Apple pioneered selling computers by advertising who a user can be with a Mac. This was the purpose behind its <a href="https://www.cultofmac.com/441206/today-in-apple-history-its-time-to-think-different/">“Think Different”</a> campaign. Even when Apple does discuss the technical details of computer performance, its use of flashy visuals and vague language makes it <a href="https://www.youtube.com/watch?v=b6g6rDDt9x8">difficult to accurately assess</a> its products against competitors.</p>
<figure class="align-center ">
<img alt="Chart comparing the CPU Performance of Apple's M1 chip against other laptops." src="https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=301&fit=crop&dpr=1 600w, https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=301&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=301&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=379&fit=crop&dpr=1 754w, https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=379&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/545391/original/file-20230829-16-4esk5c.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=379&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">While Apple provides graphs like these, they do not provide enough information to be valuable as a comparison tool. Instead, they reflect Apple’s branding and are marketed to users who may not be concerned with the details of computer performance.</span>
<span class="attribution"><a class="source" href="https://www.apple.com/ca/newsroom/2020/11/apple-unleashes-m1/">(Apple)</a></span>
</figcaption>
</figure>
<p>The messaging is clear: An Apple user is not just someone who owns a piece of tech, but someone who is cool, creative, colourful and individualistic. Now they can be healthy and well-adjusted, too. </p>
<p>But corporate mandates can be hollow because at their core they exist to increase profits. Apple’s success as a company comes from its ability to <a href="https://doi.org/10.1016/j.accfor.2013.06.003">own the consumer</a>. </p>
<p>With an airtight ecosystem, users become dependent on Apple for all their digital needs. By integrating health into that ecosystem, those users may be dependent on Apple for their well-being too. I’m not sure what happens when people incorporate their Apple self into their self-concept, but it might make them better consumers and more productive employees. Ultimately, this is the goal of <a href="https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/about-deloitte/ca-en-about-blueprint-for-workplace-mental-health-final-aoda.pdf">corporate mental health</a>. </p>
<p>Just as spa days and five-minute yoga breaks can only go so far in improving mental health, it’s not clear that iOS 17 is the medical revolution Apple hopes it will be.</p><img src="https://counter.theconversation.com/content/211263/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Owen Chevalier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New features on Apple iOS 17 aim to give users insights into their mental health, but they may also shape how people see themselves.Owen Chevalier, PhD Student, Philosophy Department, Western UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2117782023-08-24T12:19:30Z2023-08-24T12:19:30ZFor minorities, biased AI algorithms can damage almost every part of life<figure><img src="https://images.theconversation.com/files/543810/original/file-20230821-23-5oh5nq.jpg?ixlib=rb-1.1.0&rect=18%2C0%2C6265%2C3556&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> </figcaption></figure><p>Bad data does not only produce bad outcomes. It can also help to suppress sections of society, for instance vulnerable women and minorities. </p>
<p>This is the argument of <a href="https://www.bloomsbury.com/us/is-artificial-intelligence-racist-9781350374423/">my new book</a> on the relationship between various forms of racism and sexism and artificial intelligence (AI). The problem is acute. Algorithms generally need to be exposed to data – often taken from the internet – in order to improve at whatever they do, such as <a href="https://www.theguardian.com/us-news/2022/may/11/artitifical-intelligence-job-applications-screen-robot-recruiters">screening job applications</a>, or underwriting mortgages. </p>
<p>But the training data often contains many of the biases that exist in the real world. For example, algorithms could learn that most people in a particular job role are male and therefore favour men in job applications. Our data is polluted by a set of myths from the age of <a href="https://en.wikipedia.org/wiki/Age_of_Enlightenment#:%7E:text=The%20Enlightenment%20included%20a%20range,separation%20of%20church%20and%20state.">“enlightenment”</a>, including biases that lead to <a href="https://www.gaytascience.com/transphobic-algorithms/">discrimination based on gender and sexual identity</a>.</p>
<p>Judging from the history of societies where racism has played a role in <a href="https://sk.sagepub.com/books/racism-from-slavery-to-advanced-capitalism">establishing the social and political order</a>, extending privileges to white males in Europe, North America and Australia, for instance, it is reasonable to assume that residues of racist discrimination feed into our technology.</p>
<p>In my research for the book, I have documented some prominent examples. Face recognition software <a href="https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/">more commonly misidentified black and Asian minorities</a>, leading to false arrests in the US and elsewhere. </p>
<p>Software used in the criminal justice system has predicted that black offenders would have <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">higher recidivism rates</a> than they did. There have been false healthcare decisions. <a href="https://www.science.org/doi/10.1126/science.aax2342">A study found that</a> of the black and white patients assigned the same health risk score by an algorithm used in US health management, the black patients were often sicker than their white counterparts. </p>
<p>Because less money was spent on black patients who had the same level of need as white ones, the algorithm falsely concluded that black patients were healthier than equally sick white patients. This reduced the number of black patients identified for extra care by more than half. Biased data sets also facilitate the denial of mortgages to minority populations. The list goes on. </p>
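<p>The logic of that failure can be shown with a toy simulation (the patient data and the 30% spending gap below are invented purely for illustration, not taken from the study): a model that ranks patients by recorded spending, when spending systematically understates one group’s need, will under-select that group for care.</p>

```python
# Hypothetical illustration of proxy bias: a model selects patients for
# extra care by *recorded spending*, even though spending understates the
# needs of one group. All numbers here are invented for illustration.
patients = [("white", need) for need in (1, 2, 3, 4, 5)] + \
           [("black", need) for need in (1, 2, 3, 4, 5)]

def observed_cost(group, need):
    # Assumption: equally sick black patients generate ~30% less recorded
    # spending (e.g. due to barriers to access), so cost understates need.
    return need * (0.7 if group == "black" else 1.0)

# Rank by the flawed proxy and take the top half as "highest risk".
ranked = sorted(patients, key=lambda p: observed_cost(*p), reverse=True)
top_half = ranked[: len(ranked) // 2]

white_selected = sum(1 for group, _ in top_half if group == "white")
black_selected = sum(1 for group, _ in top_half if group == "black")
print(white_selected, black_selected)  # -> 3 2, despite identical needs
```

<p>Both groups have identical distributions of true need, yet the cost proxy selects fewer black patients, which is the shape of the disparity the study documented.</p>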
<h2>Machines don’t lie?</h2>
<p>Such oppressive algorithms intrude on almost every <a href="https://www.newscientist.com/article/mg25033390-200-the-essential-guide-to-the-algorithms-that-run-your-life/">area of our lives</a>. AI is making matters worse, as it is sold to us as essentially unbiased. We are told that machines don’t lie. Therefore, the logic goes, no one is to blame. </p>
<p>This pseudo-objectivity is central to the AI hype created by the Silicon Valley tech giants. It is easily discernible in the speeches of Elon Musk, Mark Zuckerberg and Bill Gates, even if now and then they <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">warn us about the projects</a> that they themselves are responsible for.</p>
<p>There are various unaddressed legal and ethical issues at stake. Who is accountable for the mistakes? Could someone claim compensation for an algorithm denying them parole based on their ethnic background in the same way that one might for a toaster that exploded in a kitchen?</p>
<p>The <a href="https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained#:%7E:text=This%20inability%20for%20us%20to,when%20they%20produce%20unwanted%20outcomes.">opaque nature of AI technology</a> poses serious challenges to legal systems which have been built around individual or human accountability. On a more fundamental level, basic human rights are threatened, as legal accountability is blurred by the maze of technology placed between perpetrators and the various forms of discrimination that can be conveniently blamed on the machine.</p>
<p>Racism has always been a systematic strategy to order society. It builds, legitimises and enforces hierarchies between the haves and have nots.</p>
<h2>Ethical and legal vacuum</h2>
<p>In such a world, where it’s difficult to disentangle truth and reality from untruth, our privacy needs to be legally protected. The right to privacy and the concomitant ownership of our virtual and real-life data needs to be codified as a human right, not least in order to harvest the real opportunities that good AI harbours for human security.</p>
<p>But as it stands, the innovators are far ahead of us. Technology has outpaced legislation. The ethical and legal vacuum thus created is readily exploited by criminals, as this brave new AI world is largely anarchic. </p>
<p>Blindfolded by the mistakes of the past, we have entered a wild west without any sheriffs to police the violence of the digital world that’s enveloping our everyday lives. The tragedies are already happening on a daily basis.</p>
<p>It is time to counter the ethical, political and social costs with a concerted social movement in support of legislation. The first step is to educate ourselves about what is happening right now, as our lives will never be the same. It is our responsibility to plan the course of action for this new AI future. Only in this way can a good use of AI be codified in local, national and global institutions.</p><img src="https://counter.theconversation.com/content/211778/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Arshin Adib-Moghaddam does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Data used to train AI systems often reflects the racism inherent in society.Arshin Adib-Moghaddam, Professor in Global Thought and Comparative Philosophies, SOAS, University of LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2114482023-08-23T11:07:16Z2023-08-23T11:07:16ZWe’re talking about AI a lot right now – and it’s not a moment too soon<figure><img src="https://images.theconversation.com/files/542343/original/file-20230811-23-omh1qf.jpg?ixlib=rb-1.1.0&rect=33%2C0%2C7372%2C4008&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ai-technology-artificial-intelligence-man-using-2263545623">LookerStudio / Shutterstock</a></span></figcaption></figure><p>When OpenAI unchained the “beast” that is ChatGPT <a href="https://venturebeat.com/ai/chatgpt-launched-six-months-ago-its-impact-and-fallout-is-just-beginning-the-ai-beat/">back in November 2022</a>, the pace of market competition between tech companies involved in AI increased exponentially.</p>
<p>Market competition determines the price of goods and services, their quality and the speed of innovation – which has been remarkable in the AI industry. However, some experts believe we are <a href="https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/">deploying the most powerful technology</a> in the world <a href="https://time.com/6281737/ai-we-cant-trust-big-tech-gary-marcus/">far too quickly</a>.</p>
<p>This could hamper our ability to detect serious problems before they’ve caused damage, resulting in profound implications for society, particularly when we can’t anticipate the capabilities of something that may end up having the ability to train itself.</p>
<p>But AI is nothing new – and while ChatGPT may have taken many people by surprise, the seeds of the current commotion over this technology were laid years ago.</p>
<h2>Is AI new?</h2>
<p>The origins of modern AI can be traced back to developments in the 1950s when Alan Turing worked to solve complex mathematical problems to <a href="https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence">test machine intelligence</a>. </p>
<p>Limited resources and computational power available at the time hindered growth and adoption. But breakthroughs in machine learning, neural networks, and data availability fuelled a resurgence of AI around the early 2000s. That prompted many industries to embrace AI. The finance and telecommunications sectors used it for <a href="https://www.mckinsey.com/featured-insights/artificial-intelligence/the-promise-and-challenge-of-the-age-of-artificial-intelligence">fraud detection and data analytics</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OQSMr-3GGvQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">TED talk by journalist Carole Cadwalladr on the topic of AI.</span></figcaption>
</figure>
<p>An explosion of data, <a href="https://dev.to/aws-builders/the-role-of-ai-in-cloud-computing-a-beginners-guide-to-starting-a-career-4h2#:%7E:text=AI%20and%20cloud%20computing%20work,deploy%20AI%20models%20at%20scale.">the development</a> of <a href="https://medium.com/@raosrinivas2580/how-cloud-computing-influences-artificial-intelligence-5f1a8a2f2d5a">cloud computing</a> and the availability of huge computing resources all later facilitated the development of AI algorithms. This significantly shaped what could be done with AI – in image and video recognition and targeted advertising, for example.</p>
<p>Why is AI getting so much attention now? AI has long been used on social media to recommend relevant posts, articles, videos and ads. The technology ethicist Tristan Harris says social media is broadly humanity’s “first contact” with AI.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1677070546717278209&quot;}"></div></p>
<p>And humanity has learned that AI-driven algorithms on social media platforms can spread <a href="https://www.apa.org/topics/journalism-facts/misinformation-disinformation">disinformation and misinformation</a> – polarising public opinion and <a href="https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review#header--4">fostering online echo chambers</a>. Campaigns spent money on targeting voters online in both the 2016 US presidential election and <a href="https://committees.parliament.uk/committee/378/digital-culture-media-and-sport-committee/news/103668/fake-news-report-published-17-19/">the UK Brexit vote</a>.</p>
<p>Both events led to public awareness about AI and how technology could be used to manipulate political outcomes. These high-profile incidents <a href="https://www.thetimes.co.uk/article/yoshua-bengio-ai-safety-artificial-intelligence-x9mknfnr5">set in motion concerns</a> about the capabilities of evolving technologies.</p>
<p>However, in 2017, a <a href="https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/">new class of AI emerged</a>. This technology is known as a transformer. It’s a machine learning model which processes language and then uses that to produce its own text and have conversations. </p>
<p>This breakthrough facilitated the creation of large language models such as ChatGPT, which can understand and generate text which resembles that written by humans. Transformer-based models such as OpenAI’s GPT (Generative Pre-trained Transformer) have demonstrated impressive capabilities in <a href="https://towardsdatascience.com/what-is-gpt-3-and-why-is-it-so-powerful-21ea1ba59811">generating coherent and relevant text</a>.</p>
<p>The difference with transformers is that, as they absorb new information, they learn from it. This potentially allows them to gain new capabilities that engineers did not programme into them.</p>
<h2>Bigger issue</h2>
<p>The processing power now available and the capabilities of the latest AI models mean that as-yet unresolved concerns around the impact of social media on society – especially on younger generations – will only grow.</p>
<p>Lucy Batley, the boss of Traction Industries, a private-sector company which helps businesses integrate AI into their operations, says that the type of analysis that social media companies can carry out on our personal data – and the detail they can extract – is “going to be automated and accelerated to a point where big tech moguls will potentially know more about us than we consciously do about ourselves”. </p>
<p>But <a href="https://www.forbes.com/sites/qai/2023/01/24/quantum-computing-is-coming-and-its-reinventing-the-tech-industry/">quantum computing</a>, which has experienced major breakthroughs in recent years, may far <a href="https://www.independent.co.uk/tech/google-quantum-computer-apocalypse-encryption-password-security-b2393516.html">surpass the performance</a> of conventional computers on particular tasks. Batley believes this would “allow the development of much more capable AI systems to probe multiple aspects of our lives”.</p>
<p>The situation for “big tech” and the countries that are leading in AI can be likened to what game theorists call the “prisoner’s dilemma”. This is a situation in which two parties must each decide whether to cooperate or betray the other. Each faces a tough choice between betrayal – which often yields a higher individual reward – and cooperation, <a href="https://plato.stanford.edu/entries/prisoner-dilemma/">with its potential for mutual benefit</a>. </p>
<p>Let’s take a scenario where we have two competing tech companies. They need to decide whether they should cooperate by sharing their research on cutting-edge technology or keep their research secret. If both companies collaborate, they could make significant advancements together. However, if Company A shares while Company B doesn’t, Company A probably loses its competitive edge.</p>
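The two-company scenario can be written out as a toy payoff table. The numbers below are invented purely for illustration; they show why keeping research secret is the individually safer move even though mutual sharing is better for both:

```python
# Hypothetical payoff table for the two-company scenario above.
# Numbers are illustrative only: higher = better outcome for that company.
PAYOFFS = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("share", "share"):   (3, 3),  # mutual advancement
    ("share", "secret"):  (0, 5),  # A loses its edge, B free-rides
    ("secret", "share"):  (5, 0),  # B loses its edge, A free-rides
    ("secret", "secret"): (1, 1),  # both stagnate
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes a company's own payoff,
    given what the opponent does."""
    return max(("share", "secret"),
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Whatever the other company does, secrecy pays more for the chooser,
# even though mutual sharing (3, 3) beats mutual secrecy (1, 1).
print(best_response("share"))   # -> secret
print(best_response("secret"))  # -> secret
```

This is the dilemma in miniature: the individually rational choice leads both parties to the worse joint outcome.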
<p>This is not too dissimilar from the situation the US currently finds itself in. The US is trying to accelerate AI development to beat foreign competition. As such, policymakers have been slow to discuss AI regulation, which would help protect society from harms caused by the technology.</p>
<h2>Uncharted territory</h2>
<p>The societal problems AI could create must be averted. We have a duty to understand these risks, and we need a collective focus to avoid repeating the mistakes made with social media. We were too late to regulate social media: by the time that conversation entered the public domain, social platforms had already entangled themselves with the media, elections, businesses and users’ lives. </p>
<p>The first major global summit on AI safety is planned for later <a href="https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-%20intelligence">this year, in the UK</a>. This is an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach. This is also a chance to invite a broader range of voices from society to discuss this significant issue, resulting in a more diverse array of perspectives on a complex matter that will affect everyone.</p>
<p>AI has huge potential to increase the quality of life on Earth, but we all have a duty to help encourage the development of responsible AI systems. We must also collectively push for brands to operate with ethical guidelines within regulatory frameworks. The best time to influence a medium is at the very start of its journey.</p><img src="https://counter.theconversation.com/content/211448/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kimberley Hardcastle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The seeds of the current commotion over AI were laid years ago.Kimberley Hardcastle, Assistant Professor in Marketing, Northumbria University, NewcastleLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2111722023-08-21T12:25:01Z2023-08-21T12:25:01ZSocial media algorithms warp how people learn from each other, research shows<figure><img src="https://images.theconversation.com/files/543348/original/file-20230817-21-haki9e.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5455%2C3612&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media pushes evolutionary buttons.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/IndiaNewInternetRules/c9d25794d9254a9ab63672ec0e896af5/photo">AP Photo/Manish Swarup</a></span></figcaption></figure><p>People’s daily interactions with online algorithms <a href="https://doi.org/10.1016/j.tics.2023.06.008">affect how they learn from others</a>, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.</p>
<p>People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see.</p>
<p>On social media platforms, algorithms are mainly <a href="https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420">designed to amplify information that sustains engagement</a>, meaning they keep people clicking on content and coming back to the platforms. I’m a <a href="https://www.kellogg.northwestern.edu/faculty/directory/brady_william.aspx">social psychologist</a>, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information “PRIME,” for prestigious, in-group, moral and emotional information.</p>
<p>In our evolutionary past, biases to learn from PRIME information were very advantageous: <a href="https://doi.org/10.1016/S1090-5138(00)00071-4">Learning from prestigious individuals is efficient</a> because these people are successful and their behavior can be copied. Paying attention to people who violate moral norms is important because <a href="https://doi.org/10.1257/aer.90.4.980">sanctioning them helps the community maintain cooperation</a>.</p>
<p>But what happens when PRIME information becomes amplified by algorithms and some people exploit algorithm amplification to promote themselves? Prestige becomes a poor signal of success because people can fake prestige on social media. Newsfeeds become oversaturated with negative and moral information so that there is conflict rather than cooperation. </p>
<p>The interaction of human psychology and algorithm amplification leads to dysfunction because social learning supports cooperation and problem-solving, but social media algorithms are designed to increase engagement. We call this mismatch <a href="https://doi.org/10.1016/j.tics.2023.06.008">functional misalignment</a>.</p>
<h2>Why it matters</h2>
<p>One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people start to form incorrect perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people begin to <a href="https://doi.org/10.1038/s41562-023-01582-0">think that their political in-group and out-group are more sharply divided</a> than they really are. Such “false polarization” might be an <a href="https://doi.org/10.1016/j.cobeha.2020.07.005">important source of greater political conflict</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WLfr7sU5W2E?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Social media algorithms amplify extreme political views.</span></figcaption>
</figure>
<p>Functional misalignment can also lead to greater spread of misinformation. A recent study suggests that people who are spreading <a href="https://doi.org/10.1037/tms0000136">political misinformation leverage moral and emotional information</a> – for example, posts that provoke moral outrage – in order to get people to share it more. When algorithms amplify moral and emotional information, misinformation gets included in the amplification.</p>
<h2>What other research is being done</h2>
<p>In general, research on this topic is in its infancy, but there are new studies emerging that examine key components of algorithm-mediated social learning. Some studies have demonstrated that <a href="https://arxiv.org/abs/2305.16941">social media algorithms clearly amplify PRIME information</a>.</p>
<p>Whether this amplification leads to offline polarization is hotly contested at the moment. A recent experiment found evidence that <a href="https://doi.org/10.1257/aer.20191777">Meta’s newsfeed increases polarization</a>, but another experiment that involved a collaboration with Meta <a href="https://doi.org/10.1126/science.abp9364">found no evidence of polarization increasing</a> due to exposure to their algorithmic Facebook newsfeed.</p>
<p>More research is needed to fully understand the outcomes that emerge when humans and algorithms interact in feedback loops of social learning. Social media companies have most of the needed data, and I believe that they should give academic researchers access to it while also balancing ethical concerns such as privacy.</p>
<h2>What’s next</h2>
<p>A key question is what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases. My research team is working on new algorithm designs that increase engagement <a href="https://doi.org/10.1016/j.tics.2023.06.008">while also penalizing PRIME information</a>. We argue that this might maintain user activity that social media platforms seek, but also make people’s social perceptions more accurate.</p>
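One way to picture such a redesign – a hypothetical sketch, not the team’s actual algorithm – is a ranking score that rewards predicted engagement but discounts posts carrying strong PRIME signals. The `engagement` and `prime_score` fields and the penalty weight below are invented stand-ins for the outputs of real prediction models:

```python
def rank_feed(posts, prime_penalty=0.5):
    """Order posts by predicted engagement, discounted by how strongly
    each post carries PRIME (prestigious, in-group, moral, emotional)
    signals. Both scores are assumed to come from upstream models."""
    def score(post):
        return post["engagement"] - prime_penalty * post["prime_score"]
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "outrage-clip", "engagement": 0.9, "prime_score": 0.8},
    {"id": "local-news",   "engagement": 0.7, "prime_score": 0.1},
]
# With the penalty applied, the less PRIME-laden post ranks first,
# even though the outrage clip scores higher on raw engagement.
print([p["id"] for p in rank_feed(feed)])  # -> ['local-news', 'outrage-clip']
```

Tuning `prime_penalty` is where the trade-off the article describes lives: too low and the feed amplifies PRIME content as before; too high and engagement may drop.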
<p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take on interesting academic work.</em></p><img src="https://counter.theconversation.com/content/211172/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>William Brady does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media companies’ drive to keep you on their platforms clashes with how people evolved to learn from each other. One result is more conflict and misinformation.William Brady, Assistant Professor of Management and Organizations, Northwestern UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2105982023-08-14T12:25:46Z2023-08-14T12:25:46Z3 ways AI is transforming music<figure><img src="https://images.theconversation.com/files/542381/original/file-20230811-32504-6469wf.jpg?ixlib=rb-1.1.0&rect=0%2C42%2C9428%2C5250&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Musicians and producers can already utilize AI to realistically reproduce the sound of any instrument or voice imaginable.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/blue-musical-instrument-wall-royalty-free-image/1283143454?phrase=digital+musical+instruments&adppopup=true">Paul Campbell/iStock via Getty Images</a></span></figcaption></figure><p>Each fall, I begin my course <a href="https://et.iupui.edu/departments/mat/research/machine-musician-lab1/">on the intersection of music and artificial intelligence</a> by asking my students if they’re concerned about AI’s role in composing or producing music.</p>
<p>So far, the question has always elicited a resounding “yes.” </p>
<p>Their fears can be summed up in a sentence: AI will create a world where music is plentiful, but musicians get cast aside.</p>
<p>In the upcoming semester, I’m anticipating a discussion about Paul McCartney, who in June 2023 announced that he and a team of audio engineers had used machine learning to uncover a “lost” vocal track of John Lennon <a href="https://www.cnbc.com/2023/06/13/paul-mccartney-says-ai-got-john-lennons-voice-on-last-beatles-record.html">by separating the instruments from a demo recording</a>. </p>
<p>But resurrecting the voices of <a href="https://www.wired.com/2011/12/ueki-loid-speech-synthesizer/">long-dead artists</a> is just the tip of the iceberg in terms of what’s possible – and what’s already being done.</p>
<p><a href="https://www.theguardian.com/music/2023/jun/23/paul-mccartney-says-theres-nothing-artificial-in-new-beatles-song-made-using-ai">In an interview</a>, McCartney admitted that AI represents a “scary” but “exciting” future for music. To me, his mix of consternation and exhilaration is spot on. </p>
<p>Here are three ways AI is changing the way music gets made – each of which could threaten human musicians in various ways:</p>
<h2>1. Song composition</h2>
<p>Many programs can already generate music with a simple prompt from the user, such as “Electronic Dance with a Warehouse Groove.”</p>
<p><a href="https://www.frontiersin.org/articles/10.3389/frobt.2021.680586/full">Fully generative apps</a> train AI models on extensive databases of existing music. This enables them to learn musical structures, harmonies, melodies, rhythms, dynamics, timbres and form, and generate new content that stylistically matches the material in the database.</p>
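As a drastically simplified illustration of that idea – real services use far more sophisticated models than this – a Markov chain can “train” on note-to-note transitions in a corpus and then generate new melodies that statistically resemble the source material. It also shows why generated output is bound to echo its training data: every step the generator takes was observed somewhere in the corpus.

```python
import random
from collections import defaultdict

def train(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the transition table to produce a new melody."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        choices = transitions.get(melody[-1])
        if not choices:
            break
        melody.append(rng.choice(choices))
    return melody

# A toy "database" of two melodies; real training sets hold millions.
corpus = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]]
model = train(corpus)
print(generate(model, "C", 8, seed=1))
```

Every consecutive pair in the generated melody appears somewhere in the corpus – a small-scale version of the similarity problem discussed below.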
<p>There are many examples of these kinds of apps. But the most successful ones, like <a href="https://boomy.com">Boomy</a>, allow nonmusicians to generate music and then post the AI-generated results on Spotify to earn money. <a href="https://www.foxbusiness.com/lifestyle/spotify-removes-ai-generated-songs-platform">Spotify recently removed many of these Boomy-generated tracks</a>, claiming that this would protect human artists’ rights and royalties.</p>
<p>The two companies quickly came to an agreement that allowed Boomy to re-upload the tracks. But the algorithms powering these apps still have a <a href="https://scholarship.law.edu/cgi/viewcontent.cgi?article=1108&context=jlt">troubling ability to infringe upon existing copyright</a>, which might go unnoticed by most users. After all, basing new music on a data set of existing music is bound to cause noticeable similarities between the music in the data set and the generated content. </p>
<figure class="align-center ">
<img alt="Yellow and pink poster attached to a lamp post that reads 'artificial intelligence plus human stupidity equals bangers.'" src="https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/542358/original/file-20230811-17-o479w3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A poster for the AI music service Boomy in Austin, Texas.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/poster-for-the-ai-music-creation-service-boomy-austin-texas-news-photo/1475137303?adppopup=true">Smith Collection/Gado/Getty Images</a></span>
</figcaption>
</figure>
<p>Furthermore, streaming services like Spotify and <a href="https://music.amazon.com/">Amazon Music</a> are naturally incentivized to develop their own <a href="https://www.musicbusinessworldwide.com/amazon-music-strikes-playlist-partnership-with-generative-ai-music-company-endel12/">AI music-generation technology</a>. Spotify, for instance, <a href="https://dittomusic.com/en/blog/how-much-does-spotify-pay-per-stream/#:%7E:text=Spotify%20pays%20artists%20between%20%240.003,holders%20and%2030%25%20to%20Spotify.">pays 70% of the revenue of each stream</a> to the artist who created it. If the company could generate that music with its own algorithms, it could cut human artists out of the equation altogether.</p>
<p>Over time, this could mean more money for giant streaming services, less money for musicians – and a less human approach to making music.</p>
<h2>2. Mixing and mastering</h2>
<p>Machine-learning-enabled apps that help musicians balance all of the instruments and clean up the audio in a song – what’s known as mixing and mastering – are valuable tools for those who lack the experience, skill or resources to pull off professional-sounding tracks. </p>
<p>Over the past decade, AI’s integration into music production has revolutionized how music is mixed and mastered. AI-driven apps like <a href="https://www.landr.com">Landr</a>, <a href="https://cryo-mix.com">Cryo Mix</a> and <a href="https://www.izotope.com">iZotope’s Neutron</a> can automatically analyze tracks, balance audio levels and remove noise. </p>
<p>These technologies streamline the production process, allowing musicians and producers to focus on the creative aspects of their work and leave some of the technical drudgery to AI. </p>
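To give a flavor of what these tools automate – this is a minimal, hypothetical sketch, not any vendor’s actual algorithm – here is peak normalization, one of the simplest level-balancing steps, on a list of audio samples in the −1.0 to 1.0 range:

```python
def peak_normalize(samples, target_peak=0.9):
    """Scale audio samples so the loudest one hits target_peak,
    a tiny stand-in for one step these mastering tools automate."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

# A quiet track whose loudest sample is only 0.2 of full scale.
quiet_track = [0.1, -0.2, 0.15, -0.05]
loud_track = peak_normalize(quiet_track)
print(round(max(abs(s) for s in loud_track), 6))  # -> 0.9
```

Commercial mastering tools go far beyond this – perceptual loudness targets, multiband compression, noise reduction – but the principle of analyzing the audio and computing a correction automatically is the same.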
<p>While these apps undoubtedly take some work away from professional mixers and producers, they also allow professionals to quickly complete less lucrative jobs, <a href="https://mackie.com/en/blog/all/8_Ways_Earn_Money_Music_Production.html">such as mixing or mastering for a local band</a>, and focus on high-paying commissions that require more finesse. These apps also allow musicians to produce more professional-sounding work without involving an audio engineer they can’t afford. </p>
<h2>3. Instrumental and vocal reproduction</h2>
<p>Using “tone transfer” algorithms <a href="https://mawf.io">via apps like Mawf</a>, musicians can transform the sound of one instrument into another. </p>
<p>Thai musician and engineer <a href="https://yaboihanoi.com">Yaboi Hanoi’s</a> song “<a href="https://youtu.be/n2bj5R5o9mE">Enter Demons & Gods</a>,” which won the third international <a href="https://youtu.be/1VH-0EAXutU">AI Song Contest</a> in 2022, was unique in that it was influenced not only by Thai mythology, but also by the sounds of native Thai musical instruments, which have a non-Western system of intonation. One of the most technically exciting aspects of Yaboi Hanoi’s entry was the reproduction of a traditional Thai woodwind instrument – <a href="https://www.metmuseum.org/art/collection/search/501870">the pi nai</a> – <a href="https://youtu.be/PbrRoR3nEVw">which was resynthesized</a> to perform the track.</p>
<p>A variant of this technology lies at the core of the <a href="https://www.vocaloid.com">Vocaloid voice synthesis software</a>, which allows users to produce convincingly human vocal tracks with swappable voices. </p>
<p><a href="https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/">Unsavory applications of this technique</a> are popping up outside of the musical realm. For example, AI voice swapping has been used to scam people out of money. </p>
<p>But musicians and producers can already use it to realistically reproduce the sound of any instrument or voice imaginable. The downside, of course, is that this technology can rob instrumentalists of the opportunity to perform on a recorded track.</p>
<p><audio preload="metadata" controls="controls" data-duration="14" data-image="" data-title="Using tone transfer, a singer's voice is turned into the sound of a trumpet." data-size="296160" data-source="Jason Palamara" data-source-url="" data-license="CC BY" data-license-url="http://creativecommons.org/licenses/by/4.0/">
<source src="https://cdn.theconversation.com/audio/2861/tone-transfer-vocal-to-trumpet.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Using tone transfer, a singer’s voice is turned into the sound of a trumpet.
<span class="attribution"><span class="source">Jason Palamara</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a><span class="download"><span>289 KB</span> <a target="_blank" href="https://cdn.theconversation.com/audio/2861/tone-transfer-vocal-to-trumpet.mp3">(download)</a></span></span>
</div></p>
<h2>AI’s Wild West moment</h2>
<p>While I applaud Yaboi Hanoi’s victory, I have to wonder if it will encourage musicians to use AI to fake a cultural connection where none exists.</p>
<p>In 2021, Capitol Music Group made headlines by signing an “AI rapper” that had been given the avatar of a Black male cyborg, but which was really the work of non-Black software engineers at the company Factory New. The backlash was swift, with the record label roundly excoriated <a href="https://www.bbc.com/news/newsbeat-62659741">for blatant cultural appropriation</a>. </p>
<p>But AI musical cultural appropriation is easier to stumble into than you might think. With the extraordinary number of songs and samples that make up the data sets used by apps like Boomy – see the open source “Million Song Dataset” <a href="http://millionsongdataset.com">for a sense of the scale</a> – there’s a good chance that a user may unwittingly upload a newly generated track that pulls from a culture that isn’t their own, or cribs from an artist in a way that too closely mimics the original. Worse still, it won’t always be clear who is to blame for the offense, and current U.S. copyright laws are contradictory and woefully inadequate to the task of regulating these issues.</p>
<p>These are all topics that have come up in my own class, which has allowed me to at least inform my students of the dangers of unchecked AI and how to best avoid these pitfalls. </p>
<p>Still, at the end of each fall semester, I’ll again ask my students if they’re concerned about an AI takeover of music. At that point, with a whole semester’s experience investigating these technologies, most of them say they’re excited to see how the technology will evolve and where the field will go. </p>
<p>Some dark possibilities do lie ahead for humanity and AI. Still, at least in the realm of musical AI, there is cause for some optimism – assuming the pitfalls are avoided.</p><img src="https://counter.theconversation.com/content/210598/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jason Palamara does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>AI can streamline the painstaking work of mixing and editing tracks. But it’s also easy to see how AI-generated music will make more money for giant streaming services at the expense of artists.Jason Palamara, Assistant Professor of Music Technology, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2093092023-08-10T12:45:59Z2023-08-10T12:45:59ZHeritage algorithms combine the rigors of science with the infinite possibilities of art and design<figure><img src="https://images.theconversation.com/files/541961/original/file-20230809-29902-o57gog.png?ixlib=rb-1.1.0&rect=53%2C0%2C7168%2C4088&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artist AbdulAlim U-K (Aikin Karr) combines the fractal structure of traditional African architecture with emerging technologies in computer graphics.</span> <span class="attribution"><a class="source" href="https://www.instagram.com/p/Cge-WOAsrkz/?img_index=2">AbdulAlim U-K</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>The model of democracy in the 1920s is sometimes called “<a href="https://www.populismstudies.org/Vocabulary/melting-pot/">the melting pot</a>” – the dissolution of different cultures into an American soup. An update for the 2020s might be “open source,” where cultural mixing, sharing and collaborating can build bridges between people rather than create divides.</p>
<p>Our research on <a href="https://doi.org/10.5209/rev_TEKN.2016.v13.n2.52843">heritage algorithms</a> aims to build such a bridge. We develop <a href="https://csdt.org">digital tools</a> to teach students about the complex mathematical sequences and patterns present in different cultures’ artistic, architectural and design practices.</p>
<p>By combining computational thinking and cultural creative practices, our work provides an entry point for students who are disproportionately left out of STEM careers, whether by race, class or gender. Even those who feel at home with equations and abstraction can benefit from narrowing the gap between the arts and sciences.</p>
<h2>What are heritage algorithms?</h2>
<p>Traditional STEM curricula often present science as a ladder you climb. For example, you might be told that math starts with counting, then goes to algebra, then calculus and so on. </p>
<p>But our research has found that the global history of science is more like a bush: Each culture has its own branching set of discoveries. Some of these discoveries offer a perspective that’s different from the theorem-proof approach for math or hypothesis-experiment approach for biology. Understanding the rules and techniques that create cultural patterns from the maker’s point of view can help bridge the gap between knowledge branches. We refer to these hybrids of computation and culture as <a href="https://doi.org/10.5209/rev_TEKN.2016.v13.n2.52843">heritage algorithms</a>, and there are examples everywhere. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two photos. On the left, one man in a hat is sitting holding a book, and another person crouches next to him pointing at the page. On the right, two people stand above a table and the person on the right is stamping a blank page." src="https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=225&fit=crop&dpr=1 600w, https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=225&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=225&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=283&fit=crop&dpr=1 754w, https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=283&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/537365/original/file-20230713-17-2yr2er.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=283&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The authors learn from artisans. Left: Ron Eglash discusses fractal patterns with an Ethiopian crafter. Right: Audrey Bennett tries her hand at Adinkra stamping in Ghana.</span>
<span class="attribution"><span class="source">Ron Eglash</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Flying over an African village, you can see the recursive geometry of <a href="https://www.rutgersuniversitypress.org/african-fractals/9780813526140">African fractals</a> in their architecture: circles of circles, rectangles within rectangles, and other “self-similar” structures. These fractal patterns also appear in their textiles, carvings, paintings, ironwork and more.</p>
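The “circles of circles” idea can be sketched as a short recursive procedure. The one-third scaling factor and the six-fold arrangement below are illustrative choices for the sake of the example, not a model of any particular village:

```python
import math

def circles_of_circles(x, y, radius, depth):
    """Recursively place smaller circles around a parent circle,
    echoing the self-similar 'circles of circles' layouts described
    above. Returns (x, y, radius) tuples for every circle generated."""
    shapes = [(x, y, radius)]
    if depth == 0:
        return shapes
    child_r = radius / 3  # illustrative scaling factor
    for k in range(6):    # six children ringed inside the parent's edge
        angle = k * math.pi / 3
        cx = x + (radius - child_r) * math.cos(angle)
        cy = y + (radius - child_r) * math.sin(angle)
        shapes += circles_of_circles(cx, cy, child_r, depth - 1)
    return shapes

layout = circles_of_circles(0, 0, 81, depth=2)
print(len(layout))  # -> 43 circles: 1 parent + 6 children + 36 grandchildren
```

The same few lines of recursion generate structure at every scale – which is exactly the self-similarity visible from the air.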
<p>Other kinds of <a href="https://doi.org/10.1007/s11423-019-09728-6">algorithms underlie</a> the repeating sequences of bent wood arcs that make up Native American wigwams, canoes and cradles. Even <a href="https://csdt.org/culture/henna/index.html">henna tattoos</a> demonstrate the interactions among computation, nature and culture.</p>
<p>These heritage algorithms challenge the <a href="https://www.routledge.com/The-Reinvention-of-Primitive-Society-Transformations-of-a-Myth/Kuper/p/book/9781138282650">myth of “primitive cultures”</a> – the idea that early Africans had no math past counting on fingers or that Native American agriculture lacked sophistication.</p>
<p>The computational thinking that is embedded in Indigenous artifacts and other creative practices, such as weaving, beadwork and quilting, is not merely decorative. It also reflects different ways of <a href="https://doi.org/10.1007/978-3-031-31293-9_18">thinking about the world</a>. Our interviews with artisans revealed how they visualize <a href="https://doi.org/10.1525/aa.1997.99.1.112">spiritual concepts</a> in formal techniques and numerical sequences. </p>
<h2>Bringing heritage algorithms to the classroom</h2>
<p>Heritage algorithms give students a way to blend the abstract rigors of math, the grounded legacies of culture and the infinite possibilities of art. To bring these algorithms to the classroom, <a href="https://csdt.org">we have created</a> interactive computer programs and simulations that we call <a href="https://www.jstor.org/stable/3804796">culturally situated design tools</a>, or CSDTs.</p>
<p>Each CSDT was created in collaboration with Indigenous elders, street artists, traditional crafters and others. With the creators’ permission, we transfer their knowledge of pattern creation into digital tools that students enjoy using and teachers enjoy implementing in their lesson plans.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A close up of a brown and white woven fabric" src="https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=290&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=290&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=290&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=364&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=364&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540603/original/file-20230801-29684-6okmwr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=364&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In a woven Navajo blanket, the line y=x forms a 30-degree angle with the horizontal axis.</span>
<span class="attribution"><span class="source">Ron Eglash</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>It’s important to craft each CSDT to reflect the way those artisans think about the cultural practice. For instance, the slope of the line y=x, mathematically calculated as “rise over run,” is 1 – for every unit you move up the line, you move a unit to the right. This line forms a 45-degree angle with the x-axis. But when Navajo weavers use this “up one, over one” pattern, the slope is closer to a 30-degree angle. This is because they weave yarn horizontally through vertical cords that are thicker than the yarn. So we made sure to preserve this feature in the weaving simulation we built.</p>
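The geometry can be checked with a couple of lines of arithmetic. The 0.58 ratio below is an illustrative value chosen to match the roughly 30-degree angle described above (tan 30° ≈ 0.577), not a measured property of any loom:

```python
import math

def weave_angle(vertical_step, horizontal_step):
    """Angle (in degrees) of an 'up one, over one' diagonal when each
    weaving step covers these physical distances."""
    return math.degrees(math.atan2(vertical_step, horizontal_step))

# Equal steps give the textbook 45-degree line for y = x...
print(round(weave_angle(1.0, 1.0)))   # -> 45
# ...but if thick warp cords make each row's rise only ~0.58 of the
# horizontal spacing, the diagonal flattens to about 30 degrees.
print(round(weave_angle(0.58, 1.0)))  # -> 30
```

Preserving that flattened angle in the simulation keeps the tool faithful to how the weavers themselves experience the pattern.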
<p>A crucial aspect of CSDTs is that students may use them to follow their interests. This freedom and independence lets students encounter new cultures, delve deeper into their own identity or mix designs from different cultures to create something completely new. </p>
<p>We have seen Black students choose an <a href="https://csdt.org/culture/quilting/appalachian.html">Appalachian quilting simulation</a>, Native American students choose <a href="https://csdt.org/culture/cornrowcurves/index.html">cornrow simulations</a> and white students create <a href="https://csdt.org/culture/beadloom/index.html">beadwork simulations</a>. Students’ creative designs often mix many cultures together – cornrows become “<a href="https://csdt.org/news/powwow/">powwow braids</a>,” and African fractal simulations turn into plants, lungs and river deltas.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A collage of several images, some depicting students holding up a quilt, another of a student working on the quilt, and another of a computer program featuring the quilt design" src="https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=414&fit=crop&dpr=1 600w, https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=414&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=414&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=521&fit=crop&dpr=1 754w, https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=521&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/537364/original/file-20230713-27-a3kuf7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=521&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Students from Harlem Academy create designs using the Appalachian and Lakota quilt CSDTs. Many Appalachian quilts contained the ‘radical rose,’ symbolizing support for abolition.</span>
<span class="attribution"><span class="source">Ron Eglash</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Heritage algorithms and CSDTs provide a powerful starting place for students to improve their <a href="https://doi.org/10.1353/cye.2009.0024">computing skills and confidence</a>. These tools even provide a foundation for a variety of careers, from <a href="https://blog.ted.com/architecture-infused-with-fractals-ron-eglash-and-xavier-vilalta/">architecture</a> to <a href="https://csdt.org/culture/anishinaabearcs/2017overview.html">environmental engineering</a>.</p>
<h2>When computation and culture collide</h2>
<p>The reach of heritage algorithms has recently extended beyond learning environments to contemporary art spaces. Artists are generating a bold new creative style using “ethnocomputing” – an understanding of computer science from a cultural perspective.</p>
<p>You can see fresh interpretations of heritage algorithms in the African fractals embedded in the work of visual artist <a href="https://www.artforum.com/print/reviews/202007/tendai-mupita-83726">Tendai Mupita</a>, the cornrow simulations integrated in the work of <a href="https://www.nytimes.com/2022/02/24/arts/rashaad-newsome-assembly-exhibit.html">Rashaad Newsome</a>, the blending of the African diaspora and technology by <a href="https://nettricegaskins.medium.com/afrofuturist-software-from-conception-to-manifestation-d05389d0874">Nettrice Gaskins</a> and the creative duo <a href="https://iconeye.com/?p=44925">Tosin Oshinowo and Chrissy Amuah</a>.</p>
<p><a href="https://www.hauserwirth.com/hauser-wirth-exhibitions/35571-the-new-bend/#about">An exhibition</a> on display <a href="https://static1.squarespace.com/static/5dc84bade8c8347aab560645/t/647f625bf3739e5c9f84d163/1686069851882/Press-Release_TheNewBend_HWNY22-1-1.pdf">in New York City</a>, <a href="https://vip-hauserwirth.com/the-new-bend-somerset/">the U.K.</a> <a href="https://vip-hauserwirth.com/the-new-bend-los-angeles/">and Los Angeles</a> explores the textile techniques of artists inspired by the African American <a href="https://www.arts.gov/stories/blog/2015/quilts-gees-bend-slideshow">quilting tradition of Gee’s Bend, Alabama</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A dark-skinned girl wearing glasses sits in front of a computer screen. Cornrow patterns are visible on the screen behind her, and imposed on the right side of the image." src="https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=331&fit=crop&dpr=1 600w, https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=331&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=331&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=416&fit=crop&dpr=1 754w, https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=416&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/537363/original/file-20230713-21522-dovvzg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=416&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A high school student uses a CSDT to simulate cornrow hairstyle patterns.</span>
<span class="attribution"><span class="source">Ron Eglash</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Our research on heritage algorithms is partially driven by a philosophical desire to reframe STEM as a source of <a href="https://nmaahc.si.edu/explore/stories/black-joy-resistance-resilience-and-reclamation">radical joy</a> for every ethnicity and identity. Inspired by the radical feminist phrase “sex-positive feminism,” we sometimes call our perspective “<a href="https://www.researchgate.net/publication/340418728_Race-positive_Design_A_Generative_Approach_to_Decolonizing_Computing">race-positive design</a>” – thinking of race not in purely negative terms of oppression but instead as a rich source of creativity, liberation and a <a href="https://doi.org/10.1007/s11528-022-00815-9">free-thinking mindset</a> for curiosity and scientific inquiry.</p>
<p>This philosophical stance also has <a href="https://csdt.org/publications/">a practical side</a>: <a href="https://www.researchgate.net/publication/314263728_From_Sports_to_Science_Using_Basketball_Analytics_to_Broaden_the_Appeal_of_Math_and_Science_Among_Youth">statistically significant</a> <a href="https://doi.org/10.17583/remie.2015.1399">improvement</a> <a href="https://doi.org/10.1145/2037276.2037281">in STEM scores</a> <a href="https://doi.org/10.1525/aa.2006.108.2.347">for underrepresented students</a>. Many teachers have recognized the potential of heritage algorithms for getting students invested in STEM. One teacher using the <a href="https://csdt.org/culture/graffiti/index.html">graffiti tool</a> told us this was the first time students asked if they could stay in her math class after school. Another said she would never teach negative numbers again without the <a href="https://csdt.org/culture/beadloom/index.html">bead loom CSDT</a>.</p>
<p>Heritage algorithms, both in the classroom and beyond, open up a two-way bridge between humanistic and technical knowledge. They offer a space where everyone – teacher and student, young and old, geek and artist – can learn, share and collaborate.</p>
<p class="fine-print"><em><span>Audrey G. Bennett receives funding from the NEH and NSF. </span></em></p>
<p class="fine-print"><em><span>Ron Eglash receives funding from the NSF.</span></em></p>
By bridging culture and computation, heritage algorithms challenge the myth of ‘primitive cultures’ and forge a new understanding of science and art.
Audrey G. Bennett, University Diversity and Social Transformation Professor, Stamps School of Art & Design, University of Michigan
Ron Eglash, Professor of Information, University of Michigan
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/209325 2023-08-04T12:28:12Z 2023-08-04T12:28:12Z
Taylor Swift’s Eras Tour is a potent reminder that the internet is not real life
<figure><img src="https://images.theconversation.com/files/540883/original/file-20230802-19-bmnrpl.jpg?ixlib=rb-1.1.0&rect=604%2C1233%2C4162%2C2500&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Swift performs at Gillette Stadium on May 19, 2023, in Foxborough, Mass., during her Eras Tour.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/taylor-swift-performs-onstage-during-taylor-swift-the-news-photo/1491637582?adppopup=true">Scott Eisen/TAS23 via Getty Images</a></span></figcaption></figure>
<p>In the weeks leading up to June 16, 2023, when I attended the Pittsburgh leg of <a href="https://www.yahoo.com/entertainment/taylor-swift-gives-bonuses-totaling-215418698.html">Taylor Swift’s Eras Tour</a>, the online chatter about the 33-year-old singer had become draining. </p>
<p>The internet was ablaze with rumors about <a href="https://theconversation.com/rooting-for-the-anti-hero-how-fans-turned-taylor-swifts-short-relationship-with-matty-healy-into-a-political-statement-207108">Swift dating Matty Healy</a>, the lead singer of the English pop-rock band The 1975. Some Swifties – the term used for diehard Taylor Swift fans – berated the pop superstar for dating Healy, who’d become mired in controversy for appearing on a podcast whose hosts <a href="https://www.rollingstone.com/music/music-news/the-1975-matty-healy-ice-spice-apology-1234721163/">made racist comments about the rapper Ice Spice</a>. </p>
<p>As the Pittsburgh leg of the tour approached, I wondered if I were about to dive headfirst into an angry mob of tens of thousands of Swifties. </p>
<p>On the day of the show, Acrisure Stadium was mobbed with 72,000 people, but the Swifties in attendance were far from angry. </p>
<p>In that moment we became deeply connected by our shared love and admiration for Swift’s music. Sociologist Emile Durkheim described this phenomenon as “<a href="https://doi.org/10.15195/v6.a2">collective effervescence</a>,” the unique surge in feeling when large groups of people come together for a shared purpose. </p>
<p>“It was rare, I was there, I was there,” Swift belted out during “<a href="https://www.youtube.com/watch?v=9OQBDdNHmXo">All Too Well</a>.” </p>
<p>I was there, too, as life events touched by Swift flashed by: sitting at my first desktop computer as a teenager in Kathmandu, Nepal, replaying “Love Story” on LimeWire; my first week in the U.S., during the 2009 MTV Video Music Awards, when <a href="https://www.vox.com/culture/2019/8/26/20828559/taylor-swift-kanye-west-2009-mtv-vmas-explained">Kanye West infamously interrupted Swift</a>; how Swift’s eighth studio album, “<a href="https://www.nytimes.com/2020/07/26/arts/music/taylor-swift-folklore-review.html">Folklore</a>,” brought me back to life after it seemed as if the world were on the verge of imploding in 2020. </p>
<h2>Collective delusion</h2>
<p>The Eras Tour was not my first experience of collective effervescence. Nor was it the first time I felt such a strong disconnect between the online and offline worlds. </p>
<p>Right before the pandemic began, there was the painfully quiet fizzling out of the <a href="https://www.nytimes.com/2020/04/08/us/politics/bernie-sanders-drops-out.html">Bernie 2020 movement</a>. As a volunteer for that campaign, I had the remarkable experience of connecting with other Americans who wanted a Bernie Sanders presidency. </p>
<p>I especially appreciated how this role connected me to the people who make up the Nepali diaspora in the U.S. We hoped to improve our immigrant experiences, whether it involved no longer fearing the deportation of loved ones <a href="https://berniesanders.com/issues/">or easier access to health care</a>.</p>
<p>But then repeated news cycles about “<a href="https://www.latimes.com/politics/story/2020-02-19/bernie-sanders-supporters-toxic-online-culture">toxic Bernie Bros</a>” seemed to drain the movement’s momentum. Mainstream media outlets reported that Sanders’ base was <a href="https://www.bostonglobe.com/2020/03/04/metro/intractable-bernie-bros-what-they-might-mean-sanders-campaign/">made up of white male cyberbullies</a>. Negative tweets had been amplified, and the words and behaviors of a few Sanders supporters all of a sudden were being portrayed as representative of an entire movement.</p>
<p>The contrast between what was being said online versus my own experiences was jarring: Here I was working to find transportation for 80-year-old Nepali grandmas who didn’t speak English but wanted to vote for Sanders. </p>
<p>Post-election analysis would show that the Bernie Bro <a href="https://www.msnbc.com/opinion/myth-white-bernie-bro-has-quietly-vanished-n1276377">trope was entirely constructed</a>; there was no evidence to show that young white men made up a majority of Sanders’ supporters. The movement, in fact, consisted of a diverse <a href="https://www.washingtonpost.com/politics/bernie-sanders-powered-by-diverse-liberal-coalition-forces-a-reckoning-for-democrats/2020/02/23/d6a15766-5641-11ea-9000-f3cffee23036_story.html">coalition of people from marginalized races and genders</a>.</p>
<figure class="align-center ">
<img alt="Women clap and hold blue 'Bernie' signs." src="https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540887/original/file-20230802-8013-x6v564.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Supporters of Sen. Bernie Sanders cheer during a Get Out to Caucus Rally in Las Vegas, Nev., on Feb. 21, 2020.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/supporters-hold-bernie-placards-as-democratic-presidential-news-photo/1202571834?adppopup=true">Frederic J. Brown/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>A vocal minority sets the agenda</h2>
<p>Online narratives distort real life more often than you might realize. </p>
<p>Research consistently shows that a small minority of people who have social media accounts post the vast majority of content. </p>
<p>In what’s termed the “<a href="https://www.nngroup.com/articles/participation-inequality/">90-9-1 rule</a>,” 90% of users on these websites only “lurk” or read content, 9% of the users reply or re-post with occasional new contributions, and only 1% of the users frequently create new content. </p>
<p>Pioneered by Jakob Nielsen, the 90-9-1 rule is <a href="https://doi.org/10.1016/j.invent.2014.09.003">one of many theories</a> within internet studies that describe participation rates, and different scholars find support for different variations of this rule. Reddit, for example, draws <a href="https://www.statista.com/statistics/443332/reddit-monthly-visitors/">over 1 billion</a> visits a month, but according to a 2017 conference paper, <a href="https://www.researchgate.net/publication/321063802_Predicting_User-Interactions_on_Reddit">an overwhelming majority of Reddit users are lurkers</a>. X, the website and app formerly known as Twitter, had <a href="https://www.bankmycell.com/blog/how-many-users-does-twitter-have">around 350 million</a> users as of 2023; however, research from 2019 found that 75% of its users <a href="https://doi.org/10.1145/3308560.3316705">were lurkers</a>.</p>
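<p>The arithmetic behind the 90-9-1 rule shows just how lopsided the resulting feed can be. The posts-per-user figures below are hypothetical, chosen only to illustrate the pattern:</p>

```python
def content_share(n_users):
    """Fraction of all posts contributed by each group under a 90-9-1
    split. Posts-per-user figures are hypothetical illustrations."""
    groups = {                       # group: (share of users, posts per user)
        "lurkers": (0.90, 0),
        "occasional": (0.09, 5),
        "heavy": (0.01, 500),
    }
    totals = {g: n_users * share * posts for g, (share, posts) in groups.items()}
    grand_total = sum(totals.values())
    return {g: t / grand_total for g, t in totals.items()}

shares = content_share(1_000_000)
# Under these assumptions, the 1% of heavy posters produce
# roughly 92% of everything the other 99% read.
print({g: round(s, 2) for g, s in shares.items()})
```

<p>Whatever the exact rates, any distribution this skewed means the “conversation” visible on a platform is authored by a tiny, unrepresentative slice of its audience.</p>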
<p>In other words, most of the discussions happening on websites like Reddit and Twitter come from a vocal minority of users – <a href="https://doi.org/10.31234/osf.io/n5d9j">whose posts are then curated and boosted by algorithms</a>.</p>
<p>Nonetheless, in the past decade, the news media have increasingly constructed narratives about collective reality based on what happens on these websites. </p>
<p>Of course, toxic online behavior exists in all online communities. But it represents the words of a smaller minority of users within the already small minority of people who post content online. Media narratives that emphasize certain groups as toxic based on online behavior – whether they are describing fandom or politics – fall into the trap of confusing the internet with real life.</p>
<p>In the weeks when Swift was dating Healy, a vocal minority of Swifties came head-to-head with <a href="https://whatstrending.com/hosts-of-the-adam-friedland-show-explain-matty-healy-comments-after-they-resurfaced-online/">a vocal minority of Healy’s defenders</a>. Then the celebrity pair ended their relationship, and collective attention moved on from that topic almost immediately. </p>
<p>Several weeks of nonstop debate, attacks and hand-wringing ended up being utterly meaningless – except to social media companies that converted this brief obsession into clicks, engagement and ad revenue.</p>
<p>My forthcoming book, “<a href="https://www.davidson.edu/people/aarushi-bhandari">Attention and Alienation</a>,” brings renewed focus to an increasingly demystified phenomenon: The online <a href="https://www.doi.org/10.5195/JWSR.2023.1100">attention economy</a> maximizes profits by designing <a href="https://doi.org/10.1017/beq.2020.32">algorithms that boost engagement</a>, particularly by promoting negativity and outrage.</p>
<h2>Oligarchy of the ‘extremely online’</h2>
<p>Sometimes the consequences of mistaking the internet for real life are dire.</p>
<p>Take reproductive health. Online rage about <a href="https://www.npr.org/2022/06/24/1102305878/supreme-court-abortion-roe-v-wade-decision-overturn">the Supreme Court’s decision to overturn Roe v. Wade</a> <a href="https://trends.google.com/trends/explore?date=today%203-m&geo=US&q=roe%20v%20wade&hl=en-US">peaked within a few days</a>, and people moved on to different topics. </p>
<p>Today, reports about reproductive health care take up <a href="https://news.google.com/search?q=roe%20v%20wade&hl=en-US&gl=US&ceid=US%3Aen">very little news media space</a> compared with garden-variety trending topics <a href="https://news.google.com/search?for=barbenheimer&hl=en-US&gl=US&ceid=US%3Aen">like “Barbenheimer”</a> – the double blockbuster release of the movies “Barbie” and “Oppenheimer” on July 21, 2023.</p>
<p>In the real world, many people continue to suffer from lack of access to lifesaving reproductive health care <a href="https://reproductiverights.org/maps/abortion-laws-by-state/">across the U.S.</a>, while the online chattering class celebrates the <a href="https://www.theguardian.com/film/2023/jul/23/barbie-review-greta-gerwig-margot-robbie-ryan-riotous-candy-coloured-feminist-fable">radical feminism of the “Barbie” movie</a>. </p>
<p>Perhaps it’s time to sideline social media and the internet when evaluating the nature of our collective reality. Reality exists outside of our devices, whereas social media algorithms push whatever keeps us tethered to the screen. There is little evidence to support the idea that <a href="https://doi.org/10.1111/cccr.12097">online discourse represents collective experiences</a>.</p>
<p>That might be easier said than done: <a href="https://www.pewresearch.org/short-reads/2022/06/27/twitter-is-the-go-to-social-media-site-for-u-s-journalists-but-not-for-the-public/">94% of journalists say they</a> use social media for their jobs.</p>
<p>But as an internet researcher – and Taylor Swift fan – I am hopeful that experiences like the Eras Tour will wake up more people to the fact that human beings are more united than social media algorithms would have us believe.</p>
<p class="fine-print"><em><span>I was a volunteer for the Bernie 2020 campaign. </span></em></p>
Media outlets increasingly construct narratives about collective reality based on what’s happening on social media.
Aarushi Bhandari, Assistant Professor of Sociology, Davidson College
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/200594 2023-07-26T10:57:54Z 2023-07-26T10:57:54Z
A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up
<figure><img src="https://images.theconversation.com/files/535859/original/file-20230705-7822-ejdft6.jpg?ixlib=rb-1.1.0&rect=606%2C0%2C3253%2C2923&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-dissolving-human-head-made-cube-2256853311">Shutterstock/Orla</a></span></figcaption></figure><p>Justice systems around the world are using artificial intelligence (AI) to assess people with criminal convictions. These AI technologies rely on machine learning algorithms and their key purpose is to predict the risk of reoffending. They influence decisions made by the courts and prisons and by parole and probation officers.</p>
<p>This kind of tech has been an intrinsic part of the UK justice system since 2001. That was the year a risk assessment tool, known as Oasys (Offender Assessment System), was introduced and began taking over certain tasks from probation officers.</p>
<p>Yet in over two decades, scientists outside the government have not been permitted access to the data behind Oasys to independently analyse its workings and assess its accuracy – for example, whether the decisions it influences lead to fewer offences or reconvictions. </p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This article is part of Conversation Insights</em></strong>
<br><em>The Insights team generates <a href="https://theconversation.com/uk/topics/insights-series-71218">long-form journalism</a> derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.</em></p>
<hr>
<p>Lack of transparency <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4155549">affects AI systems generally</a>. Their complex decision-making processes can evolve into a black box – too obscure to unravel without advanced technical knowledge.</p>
<p>Proponents believe that AI algorithms are more objective scientific tools because they are standardised, which helps to reduce human bias in assessments and decision making. This, supporters claim, makes them useful for public protection.</p>
<p>But critics say that <a href="https://journals.sagepub.com/doi/abs/10.1177/1362480618763582">a lack of access to the data</a>, as well as other crucial information required for independent evaluation, raises serious questions of accountability and transparency.</p>
<p>It also calls into question what kinds of biases exist in a system that uses data from criminal justice institutions, like the police – data that research has repeatedly shown is <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/643001/lammy-review-final-report.pdf">skewed against ethnic minorities</a>. </p>
<p>However, according to the Ministry of Justice, external evaluation <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/815078/Oasys-needs-adhoc-stats.pdf">poses data protection implications</a> because it would require access to personal data, including <a href="https://www.equalityhumanrights.com/en/equality-act/protected-characteristics">protected characteristics</a> such as race, ethnicity and gender (it is against the law to discriminate against someone because of a protected characteristic). </p>
<h2>Oasys introduced</h2>
<p>When Oasys <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">was introduced in the UK in 2001</a> it brought with it sweeping changes to how courts and probation services assessed people convicted of crimes.</p>
<p>It meant that algorithms would begin having a huge influence in deciding just how much of a “risk” people involved in the justice system posed to society. These people include those convicted of a crime and awaiting punishment, prisoners and parole applicants. </p>
<p>Before Oasys, a probation officer would interview a defendant to try to get to the bottom of their offending and assess whether they were sorry, regretful or potentially dangerous. But after 2001 this traditional client-based casework approach was cut back and the onus was increasingly put on algorithmic predictions. </p>
<p>These machine learning predictions <a href="https://prisonreformtrust.org.uk/adviceguide/offender-management-and-sentence-planning/">inform a host of decisions</a>, such as: granting bail, outcomes of immigration cases, the kinds of sentences people face (community-based, custodial or suspended), prison security classifications and assignments to rehabilitation programmes. They also help decide the conditions on how people convicted of crimes are supervised in the community and whether or not they can be released early from prison.</p>
<figure class="align-center ">
<img alt="A jail cell." src="https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536872/original/file-20230711-15-jek6e2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A jail cell, but is justice being done?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/prison-jail-cell-529712425">Shutterstock/photocritical</a></span>
</figcaption>
</figure>
<p>Some attempts at more rigorous risk assessment predate Oasys. The Parole Board in England and Wales deployed a reconviction <a href="https://core.ac.uk/download/pdf/42615026.pdf">prediction score in 1976</a> that estimated the probability of a reconviction within a fixed period of two years after release from prison. Then, in the mid-1980s, a staff member with the Cambridgeshire Probation Service developed a simple risk prediction scale to bring more objectivity and consistency to predicting whether probation was an appropriate alternative to a custodial sentence. Both methods were crude, relying on only a handful of predictors and rather informal statistical methods.</p>
<h2>Harnessing computer power</h2>
<p>Around this time, Home Office officials noticed growing interest among UK and US authorities in developing predictive algorithms that could harness the efficiencies computers offered. These algorithms would support human opinions with scientific evidence about which factors predicted reoffending. The idea was to use scarce resources more effectively while protecting the public from people categorised as being at high risk of reoffending and causing serious harm.</p>
<p>The Home Office commissioned its first statistical predictive tool, <a href="https://www.sccjr.ac.uk/wp-content/uploads/2009/01/Research_and_Practice_in_Risk_Assessment_and_Risk_Management.pdf">which was deployed in 1996</a> across probation offices in England and Wales. This initial risk tool was called the Offender Group Reconviction Scale (OGRS). The OGRS is an actuarial tool in that it uses statistical methods to assess information about a person’s past (such as criminal history) to predict the risk of any type of reoffending. </p>
<p>The OGRS is still in use today after several revisions. This simple algorithm has been built into Oasys, which has grown to include <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">additional machine learning algorithms</a>. These have developed over time, predicting different types of reoffending. Reoffending is measured as reconviction <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/research/the-evidence-base-probation/supervision-of-service-users/assessment/">within two years of release</a>.</p>
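<p>Public descriptions indicate that actuarial tools of this kind combine static criminal-history predictors into a statistical estimate of reconviction risk. The actual OGRS coefficients are not public, so the following is only a generic sketch of how such a model works – every weight and predictor here is invented for illustration:</p>

```python
import math

# Invented weights and intercept, for illustration only --
# the real OGRS coefficients are not public.
WEIGHTS = {
    "age_at_sentence": -0.05,       # older at sentencing -> lower predicted risk
    "prior_convictions": 0.15,      # more prior convictions -> higher risk
    "age_at_first_offence": -0.04,  # younger first offence -> higher risk
}
INTERCEPT = 0.2

def reconviction_probability(features):
    """Logistic (actuarial) model: probability of reconviction
    within two years, from static criminal-history predictors."""
    z = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

p = reconviction_probability(
    {"age_at_sentence": 30, "prior_convictions": 4, "age_at_first_offence": 18}
)
print(round(p, 2))  # a probability between 0 and 1
```

<p>This is precisely why independent access matters: without the real coefficients and outcome data, outside researchers cannot check whether scores like this are accurate or evenly calibrated across groups.</p>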
<p>Oasys itself is based on the “what works” approach to risk assessment. Supporters of this method say it relies upon “objective evidence” of <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">what is effective in reducing reoffending</a>. “What works” introduced some basic principles of risk assessment and rehabilitation and it gained currency with governments around the world in the 1990s.</p>
<p>Risk factors can include “criminogenic needs” – these are factors in an offender’s life that are directly related to recidivism. Examples include safe housing, job skills and mental health. The “what works” approach is based on several principles, one of which involves matching appropriate rehabilitation programmes to a person’s criminogenic needs.</p>
<p>So, a person convicted of a sex crime, with a history of alcohol abuse, might be given a sentence plan that includes a sex offender treatment programme and drug treatment. This is meant to reduce their likelihood of reoffending.</p>
<p>Following Home Office pilot studies between 1999 and 2001, Oasys was <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">rolled out nationally</a>, and His Majesty’s Prison and Probation Service (HMPPS) has used the technology widely ever since.</p>
<h2>What the algos do – scoring ‘risk’</h2>
<p>The Offender Group Reconviction Scale and variations of Oasys are frequently modified and some information about how they work <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/research/the-evidence-base-probation/supervision-of-service-users/assessment/">is publicly available</a>.</p>
<p>The available information suggests that Oasys is calibrated to predict risk. The algorithms consume the data probation officers obtain during interviews and information in self-assessment questionnaires completed by the person in question. That data is then used to <a href="https://prisonreformtrust.org.uk/adviceguide/offender-management-and-sentence-planning/">score a set of risk factors</a> (criminogenic needs). According to the designers, scientific studies indicate that these needs are <a href="http://pure-oai.bham.ac.uk/ws/files/10614791/CJ_B_revised_Sep_11_postprintcopy.pdf">linked to risks of reoffending</a>.</p>
<figure class="align-center ">
<img alt="An illustration of a machine learning algorithm" src="https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536874/original/file-20230711-27-9frkhw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Machine learning algorithms are a ‘black box’.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/cyber-security-concept-learning-algorithms-analysis-1408742921">Shutterstock/yours</a></span>
</figcaption>
</figure>
<p>The risk factors include static (unchangeable) things such as criminal history and age. But they also comprise dynamic (changeable) factors. In Oasys, <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/815078/Oasys-needs-adhoc-stats.pdf">dynamic factors include</a>: accommodation, employability, relationships, lifestyle, drugs misuse, alcohol misuse, thinking and behaviour, and attitudes. Different weights are assigned to different risk factors as some factors are said to have <a href="https://www.cep-probation.org/wp-content/uploads/2018/11/Presentation-Recent-thinking-results-from-Oasys1.pdf">greater or lesser predictive ability</a>.</p>
<p>So what type of data is obtained from the person being risk assessed? Oasys has 12 sections. Two sections concern criminal history and the current offence. The other ten address areas related to needs and risk. <a href="https://www.researchgate.net/publication/236786501_Negotiated_Risk_Actuarial_Illusions_and_Discretion_in_Probation">Probation officers use discretion</a> in scoring many of the dynamic risk factors.</p>
<h2>The person becomes a set of numbers</h2>
<p>The probation officer may, for example, judge whether the person has “suitable accommodation”, which could require considering such things as safety, difficulties with neighbours, available amenities and whether the space is overcrowded. The officer will determine whether the person has a drinking problem or if impulsivity is an issue. These judgments can increase the person’s “risk profile”. In other words, a probation officer may consider dynamic risk factors like having no fixed address and having a history of drug abuse, and say that the person poses a higher risk of reoffending.</p>
<p>The algorithms assess the probation officers’ entries and produce numeric risk scores: the person becomes a set of numbers.</p>
<p>These numbers are then recombined and placed into low-, medium-, high-, and very <a href="http://pure-oai.bham.ac.uk/ws/files/10614791/CJ_B_revised_Sep_11_postprintcopy.pdf">high-risk categories</a>. The system may also associate the category with a percentage indicating the proportion of people <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">who reoffended in the past</a>. </p>
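<p>The scoring mechanics described above – officer-scored factors combined by weight into a score, then banded into categories – can be sketched in miniature. The weights, factor names and band cut-offs below are invented for illustration only; the real Oasys weightings and thresholds are not fully public.</p>

```python
# Illustrative sketch of an actuarial weighted-sum risk score.
# All weights, factor names and band cut-offs are hypothetical --
# the real Oasys weightings and thresholds are not fully public.

WEIGHTS = {
    "criminal_history": 3.0,   # static factors often carry greater weight
    "age_under_25": 2.0,
    "accommodation": 1.5,      # dynamic (changeable) factors
    "drugs_misuse": 2.5,
    "alcohol_misuse": 1.0,
}

# (threshold, label): a score receives the highest band it reaches
BANDS = [(0, "low"), (5, "medium"), (10, "high"), (15, "very high")]

def risk_score(factors: dict) -> float:
    """Weighted sum of officer-scored factors (e.g. each scored 0-2)."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def risk_band(score: float) -> str:
    band = BANDS[0][1]
    for threshold, label in BANDS:
        if score >= threshold:
            band = label
    return band

person = {"criminal_history": 2, "age_under_25": 1,
          "accommodation": 1, "drugs_misuse": 1, "alcohol_misuse": 0}
score = risk_score(person)   # 6.0 + 2.0 + 1.5 + 2.5 + 0.0 = 12.0
band = risk_band(score)      # "high"
```

<p>A person’s banding therefore turns on both the officer’s factor scores and where the cut-offs sit – one reason small scoring judgments can move someone between categories.</p>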
<p>However, there is simply no specific guidance on how to translate any of the risk of reoffending scores into actual sentencing decisions. Probation officers conduct the assessments and they form part of the pre-sentence report (PSR) they present to the court along with a recommended intervention. But it is left to the court to determine a sentence, in line with the provisions of <a href="https://www.sentencingcouncil.org.uk/">the Sentencing Council</a>.</p>
<p>There is no dataset available to us that directly links Oasys predictions to the decisions they are meant to inform. Hence, we cannot know what decision-makers are doing with these scores in practice.</p>
<p>The situation is muddier considering that multiple risk tools produce different ratings (high, medium or low) for the same individual. That’s because the algorithms predict different offence types (general, violent, contact sexual and indecent images).</p>
<p>So a person can collect several different ratings. It could be the person is labelled high risk of any reoffending, medium risk of violent offending, and low risk of both sexual offending types. What is a judge to do with these seemingly disparate pieces of data? Probation officers provide some recommendations but the decision is ultimately left to the judge.</p>
<h2>Impact on workloads and risk aversion</h2>
<p>Another issue is that probation officers have been known to struggle with completing Oasys assessments considering the significant amount of time it takes for each person. In 2006, <a href="https://journals.sagepub.com/doi/pdf/10.1177/0264550506060861">researchers spoke to 180 probation officers</a> and asked them about their views on Oasys. One probation officer called it “the worst tax form you’ve ever seen”. </p>
<p>In a different study, another probation officer said Oasys was an arduous and time-intensive “<a href="https://journals.sagepub.com/doi/full/10.1177/09500170211003825">box-ticking exercise</a>”. </p>
<p>What can also happen is that risk aversion becomes entrenched in the system due to the fear of getting it wrong. The backlash can be swift and severe if a person assessed as low risk commits a serious offence, as many high-profile media scandals attest. <a href="https://prisonreformtrust.org.uk/wp-content/uploads/2022/09/Making_progress.pdf">In a report for the Prison Reform Trust</a>, one long-term prisoner commented: </p>
<blockquote>
<p>They repeatedly go on about ‘risk’ but I realised many years ago that this has nothing to do with risk … it’s all about accountability, they want someone to blame should it all go wrong.</p>
</blockquote>
<p>The fear of being blamed is not an idle one. A probation officer was reportedly <a href="https://www.theguardian.com/uk-news/2022/dec/22/probation-officer-who-assessed-killamarsh-murderer-reportedly-sacked">sacked in 2022 for gross misconduct</a> for rating Damien Bendall as medium risk rather than high risk after a conviction for arson. Bendall was released with a suspended sentence. Within three months, he murdered his pregnant partner and three children.</p>
<p>Jordan McSweeney, another convicted murderer, was released from prison in 2022 with an <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/wp-content/uploads/sites/5/2023/01/FINAL-JM-report-HMI-Probation.pdf">assessment of medium risk</a>. Three days later, he raped and brutally killed a young woman walking home alone. <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/media/press-releases/2023/01/jmsfor/">A review of the case</a> determined that he had been incorrectly assessed and should instead have been labelled high risk. </p>
<p>But unlike in the Bendall case, where an individual probation officer was apparently blamed, the chief inspector of probation, Justin Russell, explained:</p>
<blockquote>
<p>Probation staff involved were … experiencing unmanageable workloads made worse by high staff vacancy rates – something we have increasingly seen in our local inspections of services. Prison and probation services didn’t communicate effectively about McSweeney’s risks, leaving the Probation Service with an incomplete picture of someone who was likely to reoffend.</p>
</blockquote>
<h2>‘Bias in, bias out’</h2>
<p>Despite its widespread use, there has been no independent audit examining the kind of data Oasys relies on to come to its decisions. And that could be a problem, particularly for people from minority ethnic backgrounds.</p>
<p>That’s because Oasys, directly and indirectly, incorporates socio-demographic data into its tools.</p>
<p>AI systems, like Oasys, rely on arrest data as proxies for crime when they could in some cases be proxies for racially biased law enforcement (and there are plenty of examples <a href="https://theconversation.com/stephen-lawrence-murder-what-new-suspect-adds-to-our-understanding-of-this-landmark-case-208513">in the UK</a> and <a href="https://theconversation.com/george-floyd-why-the-sight-of-these-brave-exhausted-protesters-gives-me-hope-139804">around the world</a> of that). Predicting risks of reoffending on the basis of such data raises serious ethical questions. This is because racially biased policing can permeate the data, ultimately biasing predictions and creating the proverbial “<a href="https://www.yalelawjournal.org/article/bias-in-bias-out">bias in, bias out</a>” problem.</p>
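<p>The mechanism can be shown with a toy simulation (all numbers invented): two groups with identical underlying offending behaviour, but one policed more heavily, so its offences are more likely to become recorded arrests. Any score built on arrest counts then rates the more heavily policed group as higher risk.</p>

```python
import random

random.seed(0)

TRUE_RATE = 0.3                     # same true offending rate in both groups
RECORD_PROB = {"A": 0.4, "B": 0.8}  # chance an offence becomes an arrest record
                                    # (group B is policed more heavily)

def recorded_arrests(group: str, years: int = 5) -> int:
    """Offences occur at the same rate; only the recording rate differs."""
    offences = sum(random.random() < TRUE_RATE for _ in range(years))
    return sum(random.random() < RECORD_PROB[group] for _ in range(offences))

# Average arrest count per person -- the raw input a risk score would consume.
mean_a = sum(recorded_arrests("A") for _ in range(10_000)) / 10_000
mean_b = sum(recorded_arrests("B") for _ in range(10_000)) / 10_000
# mean_b comes out roughly twice mean_a despite identical behaviour:
# the bias in the data becomes bias in the prediction.
```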
<p>In this way, criminal history records open up avenues for labelling and punishing people according to <a href="https://www.equalityhumanrights.com/en/equality-act/protected-characteristics">protected characteristics</a>, like race, giving rise to racially biased outcomes. This could mean, for example, a higher percentage of minorities rated in the higher risk groups than non-minorities.</p>
<p>Another source of bias could stem from <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/inspections/race-equality-in-probation/">the way officers “rate” ethnic minorities</a> when answering Oasys-led questions. Probation officers may assess minority ethnic people differently on questions such as whether they have a temper control problem, are impulsive, hold pro-criminal attitudes, or recognise the impact of their offending on others. <a href="https://theconversation.com/prince-harry-is-wrong-unconscious-bias-is-not-different-to-racism-198103">Unconscious biases</a> could be at play here, resulting from cultural differences in how various ethnic groups perceive these issues. For instance, behaviour that one cultural background reads as a bad temper may be regarded as acceptable emotional expression in another.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-can-black-people-feel-safe-and-have-confidence-in-policing-191521">How can black people feel safe and have confidence in policing?</a>
</strong>
</em>
</p>
<hr>
<p>In its <a href="https://lordslibrary.parliament.uk/ai-technology-and-the-justice-system-lords-committee-report/#:%7E:text=1.-,What%20were%20the%20committee's%20findings%3F,solving%20in%20the%20justice%20system.">review of AI in the justice system</a> in 2022, the justice and home affairs committee of the House of Lords noted that there are “concerns about the dangers of human bias contained in the original data being reflected, and further embedded, in decisions made by algorithms”.</p>
<p>And it’s not just the UK where such issues have arisen. The problem of racial bias in justice systems <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/643001/lammy-review-final-report.pdf">has been noted in various countries</a> where risk assessment algorithms similar to Oasys are deployed. </p>
<p>In the US, the <a href="https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm">Compas</a> and <a href="https://theconversation.com/criminal-justice-algorithms-being-race-neutral-doesnt-mean-race-blind-177120">Pattern</a> algorithms are used widely, and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9755051/pdf/main.pdf">the Level of Service family of tools</a> has been taken up in Australia and Canada.</p>
<p>The Compas system, for instance, is an AI algorithm used by US judges to make decisions on granting bail and sentencing. <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">An investigation claimed</a> that the system generated “false positives” for black people and “false negatives” for white people. In other words, it suggested that black people would reoffend when, in reality, they did not and suggested that white people would not reoffend when they actually did. But the developer of the system has challenged these claims.</p>
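<p>The check at the heart of that investigation is a simple confusion-matrix calculation: compare false positive and false negative rates between groups. The records below are invented purely to reproduce the shape of the claimed disparity; they are not Compas data.</p>

```python
# Each record: (predicted high risk?, actually reoffended?)
def error_rates(records):
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)
    positives = sum(1 for _, actual in records if actual)
    return fp / negatives, fn / positives  # (false positive rate, false negative rate)

# Invented data illustrating the claimed pattern, not real Compas records.
group_a = [(True, False)] * 4 + [(False, False)] * 6 + [(True, True)] * 8 + [(False, True)] * 2
group_b = [(True, False)] * 1 + [(False, False)] * 9 + [(True, True)] * 5 + [(False, True)] * 5

fpr_a, fnr_a = error_rates(group_a)  # (0.4, 0.2): often wrongly flagged, rarely missed
fpr_b, fnr_b = error_rates(group_b)  # (0.1, 0.5): rarely wrongly flagged, often missed
```

<p>A tool can be equally “accurate” overall for both groups while distributing its errors very unevenly between them – which is exactly the dispute between ProPublica and the developer.</p>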
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-why-installing-robot-judges-in-courtrooms-is-a-really-bad-idea-208718">AI: why installing 'robot judges' in courtrooms is a really bad idea</a>
</strong>
</em>
</p>
<hr>
<p>Studies suggest that such outcomes stem from racially biased decision making embedded in the data which the developers select to represent the <a href="https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/">risk factors that will determine the algorithm’s predictions</a>. Criminal history data, such as police arrest records, is one example.</p>
<p>Other socio-economic data that developers select to represent risk factors may also be problematic. People will score as being higher risk if they do not have suitable accommodation or are unemployed. In other words, if you are <a href="https://academic.oup.com/bjc/article/60/4/1080/5751792">poor or disadvantaged</a> the system is stacked against you.</p>
<p>People are also classed as “high risk” for personal circumstances which are sometimes beyond their control. Risk factors include “not having a good relationship with a partner” and “undergoing psychiatric treatment”.</p>
<p>Meanwhile, a report <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/wp-content/uploads/sites/5/2021/12/Academic-Insights-Kemshall-1.pdf">issued by Her Majesty’s Inspectorate of Probation</a> in 2021 alludes to the problem of conscious and unconscious biases which can enter the process via probation officers’ assessments, thereby infecting the outcomes.</p>
<p>More transparency could be useful for tracking when and how probation officer discretion has potentially tainted the final assessment, which could have resulted in people being incarcerated unnecessarily or being allocated inappropriate treatment programmes. This could result <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/wp-content/uploads/sites/5/2021/12/Academic-Insights-Kemshall-1.pdf">in flawed risk predictions</a>.</p>
<p>For example, <a href="https://www.justiceinspectorates.gov.uk/hmiprobation/wp-content/uploads/sites/5/2021/12/Academic-Insights-Kemshall-1.pdf">the report states</a>:</p>
<blockquote>
<p>It is impossible to be free from bias. How we think about the world and consider risk is intrinsically tied up with our emotions, values and tolerance (or otherwise) of risk challenges.</p>
</blockquote>
<h2>Social engineering?</h2>
<p>Miklos Orban, visiting professor at the University of Surrey School of Law, recently engaged with the Ministry of Justice seeking information on Oasys. One of us (Melissa) spoke with Orban about this and he expressed concerns that the system might be a form of social engineering.</p>
<p>He said that governmental officials were eliciting personal and sensitive information from defendants who may think they are making these disclosures to get help or sympathy. But the officers may instead use them for another purpose, such as labelling them with a drinking or drugs problem and then requiring them to go on a suitable treatment programme. He said: </p>
<blockquote>
<p>As a convict, you know very little of how risk assessment tools work, and I have my doubts as to how well judges and parole officers understand statistical models like Oasys. And that’s my number one concern.</p>
</blockquote>
<figure class="align-center ">
<img alt="Statue of the scale of justice" src="https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536873/original/file-20230711-21-unn1e6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Do all sections of society get the same justice?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/statue-justice-380912410">Shutterstock/Michal Kalasek</a></span>
</figcaption>
</figure>
<p>Not much is known about the accuracy of Oasys in relation to gender and ethnicity either. <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/449357/research-analysis-offender-assessment-system.pdf">One available study</a> (though a bit dated as it looked at a sample from 2007) shows the non-violent and violent predictive tools are less accurate with women and minority ethnic people.</p>
<p>Meanwhile, Justice, a legal reform organisation, recently <a href="https://files.justice.org.uk/wp-content/uploads/2022/03/22164155/JUSTICE-A-Parole-System-fit-for-Purpose-20-Jan-2022.pdf">cited a lack of research</a> on the accuracy of these tools for women and trans prisoners.</p>
<p>In terms of racial bias, <a href="https://www.justiceinspectorates.gov.uk/hmiprisons/wp-content/uploads/sites/4/2020/10/Minority-ethnic-prisoners-and-rehabilitation-2020-web-1.pdf">an HM Inspectorate of Prisons’ audit</a> found that an Oasys assessment had not been completed or reviewed in the prior year for almost 20% of black and minority ethnic prisoners.</p>
<p>This is a serious issue because further evaluation can help ensure that minority ethnic people are receiving similar treatment or being assigned to helpful programming. It can avoid probation officers simply assuming the risk status of minority ethnic people is unchangeable and thus reduce their chances of early release since Oasys assessments are required to ascertain whether interventions have <a href="https://www.justiceinspectorates.gov.uk/hmiprisons/wp-content/uploads/sites/4/2020/10/Minority-ethnic-prisoners-and-rehabilitation-2020-web-1.pdf">reduced risks of reoffending</a>.</p>
<p><a href="https://mmuperu.co.uk/bjcj/articles/race-equality-in-probation-services-in-england-and-wales-a-procedural-justice-perspective/">Researchers with the Inspectorate of Probation</a> encouraged designers of Oasys to expand the ways it can incorporate a person’s personal experiences with discrimination and how it may impact their relationship with the criminal justice system. But, so far, and to the best of our knowledge, this has not been done. </p>
<h2>Algorithms affect real people</h2>
<p>Oasys results follow a person’s path through the criminal justice system and could influence key decisions from sentencing to parole eligibility.</p>
<p>Such serious decisions have huge consequences on peoples’ lives. Yet officials can decline to disclose Oasys results to the defendant in question if they are thought to contain “sensitive information”. They can ask and be shown their completed assessment, but they are <a href="https://prisonreformtrust.org.uk/adviceguide/offender-management-and-sentence-planning/">not guaranteed to see it</a>.</p>
<p>Even if they are given their scores, defendants and their lawyers face significant hurdles in understanding and challenging their assessments. There is no legal obligation to publish information about the system, although the Ministry of Justice has commendably <a href="https://www.cep-probation.org/wp-content/uploads/2018/10/Debdin-Compendium-of-Oasys-research.pdf">made certain information public</a>.</p>
<p>Still, even if more data were released, defence lawyers may not have the scientific <a href="https://www.nacdl.org/Document/RiskAssessmentReport">skills to examine the assessments</a> with a sufficiently critical eye. </p>
<p>Some prisoners describe additional challenges. They complain that their risk scores do not reflect how they see themselves. Others believe that their <a href="https://journals.sagepub.com/doi/full/10.1177/17488958221098887">scores contain errors</a>, and some feel that Oasys mislabels them. In another report compiled by the Prison Reform Trust, one prisoner stated: “Oasys is who I was, not who I am now.”</p>
<p>And a man serving a life sentence described the repeated risk assessment when he <a href="https://journals.sagepub.com/doi/full/10.1177/17488958221098887">spoke to a researcher</a> at the University of Birmingham:</p>
<blockquote>
<p>I have likened it to a small snowball running downhill. Each turn it picks up more and more snow (inaccurate entries) until eventually you are left with this massive snowball which bears no semblance to the original small ball of snow. In other words, I no longer exist. I have become a construct of their imagination. It is the ultimate act of dehumanisation.</p>
</blockquote>
<p>Not all judicial officers are impressed either. When asked about <a href="https://core.ac.uk/download/pdf/213020402.pdf">using a risk assessment tool</a> that the state required, a judge in the US said: “Frankly, I pay very little attention to the worksheets. Attorneys argue about them, but I really just look at the guidelines. I also don’t go to psychics.”</p>
<p>There have been relatively few legal challenges to any of the risk assessment algorithms in use across the world. </p>
<p>But one case stands as an outlier. In 2018, the <a href="https://www.hrlc.org.au/human-rights-case-summaries/2018/12/17/supreme-court-of-canada-rules-use-of-psychological-risk-assessment-tools-on-indigenous-offenders-illegal">Supreme Court of Canada ruled</a> in the case of Ewert v Canada that it was unlawful for the prison system to use a predictive algorithm (not Oasys) on Indigenous inmates.</p>
<p>Ewert was an Indigenous Canadian serving time in prison for murder and attempted murder. He challenged the prison system’s use of an AI tool to assess his risk of recidivism. </p>
<p>The problem was the lack of evidence that the particular tool was sufficiently accurate when applied to the Indigenous population in Canada. In other words, the tool had never been tested on Indigenous Canadians. </p>
<p>The court understood that there might be risk-relevant differences between Indigenous and non-Indigenous peoples as to why they commit crimes. But since the algorithm had not been tested on Indigenous people, its accuracy for that population was not known. Therefore, using the tool to assess their risks violated the legal requirement that information about an offender must be accurate before it can be used for decision making. </p>
<p>The court also noted that the over-representation of Indigenous people in the Canadian justice system was in part attributable to discriminatory policies. </p>
<h2>Individual vs group risk</h2>
<p>The feeling that the scores produced by risk assessment algorithms such as Oasys may not be properly <a href="https://prisonreformtrust.org.uk/wp-content/uploads/2022/09/Making_progress.pdf">personalised or contextualised</a> finds merit when considering how predictive algorithms in general work.</p>
<p>Assessing people to produce risk scores has a longer history in business. The lending industry <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/819055/Landscape_Summary_-_Bias_in_Algorithmic_Decision-Making.pdf">uses algorithms to assess</a> the creditworthiness of customers. Insurance companies deploy algorithms to generate quotes for car insurance. The insurance algorithms often use driving records, age and gender to determine the likelihood of claiming against the policy.</p>
<p>But an all too common and mistaken assumption is that algorithms can provide a prediction about the specific person. On the contrary, publicly available information shows that the algorithms <a href="https://theconversation.com/we-use-big-data-to-sentence-criminals-but-can-the-algorithms-really-tell-us-what-we-need-to-know-77931">rely upon statistical groups</a>.</p>
<figure class="align-center ">
<img alt="3D illustration of a blue figure in a group of white figures" src="https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=257&fit=crop&dpr=1 600w, https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=257&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=257&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=323&fit=crop&dpr=1 754w, https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=323&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/536876/original/file-20230711-9022-fo4xfu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=323&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Can a person be held responsible for the actions of a group?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/blue-individual-crowd-concept-leadership-excellence-1433701493">shutterstock/peterschreiber.media</a></span>
</figcaption>
</figure>
<p>What does this mean? As we said earlier, they compare the circumstances and attributes of <a href="http://pure-oai.bham.ac.uk/ws/files/10614791/CJ_B_revised_Sep_11_postprintcopy.pdf">the person being risk assessed</a> with risk factors and scores associated with criminal justice populations – or groups.</p>
<p>For example, what if “John” is placed in the medium-risk category, which is associated with a reoffending likelihood of 30%? This does not mean there is a 30% chance that John will reoffend. Instead, it means that about 30% of those assigned medium risk are forecast to reoffend, based on the observation that 30% of the medium-risk group had in the past been reconvicted.</p>
<p>This number cannot be directly assigned to any individual within that medium-risk group. John may, individually, have a 1% chance of reoffending. The scales are not individualised in this way and so John, himself, cannot be assigned specifically with a number.</p>
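<p>A toy example makes the distinction concrete. Suppose (with invented numbers) the medium-risk band contains four people whose true chances of reoffending vary widely but average 30%:</p>

```python
# Invented probabilities: individuals inside one band can differ enormously.
medium_risk_band = {"John": 0.01, "Priya": 0.10, "Sam": 0.44, "Alex": 0.65}

group_rate = sum(medium_risk_band.values()) / len(medium_risk_band)
# group_rate == 0.30 -- the 30% the tool reports for the band as a whole,
# while John's own chance is 1%: the band rate is not his probability.
```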
<p>The reason for this is that the predictive factors are <a href="https://amplitude.com/blog/causation-correlation">not causal in nature</a>. They are correlated, meaning there may be some relationship between the factors and reoffending. Oasys uses male gender as one of the predictive factors of reoffending. But being male does not cause reoffending. The relationship as perceived by Oasys merely suggests that males are more likely to commit crimes than females.</p>
<p>There are important consequences to this. An individual can thereby be seen as being punished not for what they are personally predicted to do, but for what others – who share a similar risk score – have done.</p>
<p>This is why more transparency of predictive algorithms is needed.</p>
<p>But even if we know what the inputs are, the weighting system is often obscure as well. And developers are frequently changing the algorithms for a host of reasons. The purposes may be valid. It could be that predictors of reoffending change over time in connection with societal shifts. Or it could be that new scientific knowledge suggests a modification is necessary.</p>
<p>Nevertheless, we have been unable to discover much about how well the Oasys system, or its components, performs. The Ministry of Justice has, to our knowledge, only <a href="http://www.bl.uk/collection-items/compendium-of-research-and-analysis-on-the-offender-assessment-system-Oasys-20092013">released retroactive results</a>. </p>
<p>Those statistics cannot tell us about the predictive performance of the tool for assessments made today, or how accurate those assessments will prove when the same offenders are followed up in two years’ time. Frequent retrospective evaluations are needed to provide up-to-date information on the algorithms’ performance.</p>
<h2>Independent evaluation</h2>
<p>To the best of our knowledge (and to the knowledge of other experts in the field), Oasys has <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9755051/pdf/main.pdf">not been independently evaluated</a>. There is a clear need for more information on the effectiveness and accuracy of these tools and their impact on gender, race, disability <a href="https://files.justice.org.uk/wp-content/uploads/2022/03/22164155/JUSTICE-A-Parole-System-fit-for-Purpose-20-Jan-2022.pdf">and other protected characteristics</a>. Without these sources it is not possible to fully understand the prospects and challenges of the system.</p>
<p>We acknowledge that the lack of transparency surrounding Oasys is common, though not universal, among such algorithms deployed by justice systems and other sectors across the world. <a href="https://case-law.vlex.com/vid/state-v-loomis-no-888404547">A court case in the state of Wisconsin</a> that challenged the use of a risk assessment tool that the developer claimed was confidential succeeded only to a point.</p>
<p>The defendant, convicted of charges related to a drive-by shooting, claimed that it was unfair to use a tool which used a private algorithm because it prevented him from challenging its scientific credentials. The US court ruled that the government did not have to reveal the underlying algorithm. However, it required authorities to issue warnings when the tool was used. </p>
<p>These warnings included: </p>
<ul>
<li><p>the fact that failure to disclose meant it was not possible to tell how scores were determined</p></li>
<li><p>the algorithms were group-based assessments incapable of individualised predictions</p></li>
<li><p>there could be biases toward minority ethnic people</p></li>
<li><p>the tool had not been tested for use in the state of Wisconsin.</p></li>
</ul>
<h2>Opening up the black box</h2>
<p>Problems such as AI bias and lack of transparency are not peculiar to Oasys. They affect many other data-driven technologies deployed by public sector agencies. </p>
<p>In response, UK government agencies, such as the <a href="https://www.gov.uk/government/organisations/central-digital-and-data-office">Central Digital and Data Office</a> and the <a href="https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation">Centre for Data Ethics and Innovation (CDEI)</a> have recognised the need for ethical approaches to algorithm design and implementation and have introduced remedial strategies.</p>
<p>A recent example is the <a href="https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub">Algorithmic Transparency Recording Standard Hub</a> which offers public sector organisations the opportunity to provide information about their algorithms.</p>
<p>A relatively recent <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/819055/Landscape_Summary_-_Bias_in_Algorithmic_Decision-Making.pdf">report published by the CDEI</a> also discussed bias-limitation measures, such as reducing the significance of factors like arrest history, as these have been shown to act as proxies for race.</p>
<p>A post-prediction remedy in the CDEI report requires practitioners to lower the risk classification allocated to people belonging to a group known to be <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/819055/Landscape_Summary_-_Bias_in_Algorithmic_Decision-Making.pdf">consistently vulnerable to higher risk AI scores</a> than others.</p>
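<p>The CDEI report describes this remedy in policy terms rather than code, but the underlying idea can be sketched in a few lines. The group labels, risk bands and adjustment size below are all hypothetical:</p>

```python
# Illustrative sketch only: a post-prediction adjustment that lowers
# the risk band of anyone belonging to a group known to attract
# systematically inflated algorithmic scores. All names and numbers
# here are hypothetical, not taken from any real system.

def adjust_scores(scored_people, penalised_groups, adjustment=-1):
    """Return a copy of the scored records, lowering the risk band
    for members of the listed groups. Bands run 1 (low) to 5 (high)."""
    adjusted = []
    for person in scored_people:
        band = person["risk_band"]
        if person["group"] in penalised_groups:
            band = max(1, band + adjustment)  # never drop below band 1
        adjusted.append({**person, "risk_band": band})
    return adjusted

people = [
    {"name": "A", "group": "x", "risk_band": 4},
    {"name": "B", "group": "y", "risk_band": 4},
]
# Group "x" is (hypothetically) known to receive higher scores.
print(adjust_scores(people, {"x"}))  # A drops to band 3; B stays at 4
```

The remedy operates after prediction, so the underlying model is untouched; only the final classification handed to decision-makers changes.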
<p>More generally, researchers and civil society organisations have proposed <a href="https://dl.acm.org/doi/10.1145/3351095.3372873">pre- and post-implementation audits</a> to test, detect and resolve AI problems of the kind associated with Oasys.</p>
<p>The need for appropriate regulation of AI systems including those deployed for risk assessment has also been recognised by <a href="https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/the-benefits-and-harms-of-algorithms-a-shared-perspective-from-the-four-digital-regulators">key regulatory bodies</a> in the UK and around the world, such as Ofcom, the Information Commissioner’s Office and the Competition and Markets Authority.</p>
<p>When we put these issues to the MOJ, it said the system had been subject to external review, but gave no specifics about the data involved. It said it has been making data available externally through the <a href="https://www.gov.uk/guidance/ministry-of-justice-data-first">Data First</a> programme and that the next dataset to be shared with the programme will be “based on” the Oasys database and released “within 12 months”.</p>
<p>An MOJ spokesperson added: “The Oasys system has been subject to external review and scrutiny by the appropriate bodies. For obvious reasons, granting external access to sensitive offender information is a complex process, which is why we’ve set up Data First which allows accredited researchers to access our information in an ethical and responsible way.”</p>
<p>In the end, we recognise that algorithmic systems are here to stay and we acknowledge the ongoing efforts to reduce problems with accuracy and bias.</p>
<p>Better access to, and input from, external experts to evaluate these systems and put forward solutions would be a useful step towards making them fairer.</p>
<p>The justice system is vast and complex and technology is needed to manage it. But it is important to remember that there are people behind the numbers. </p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=112&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=112&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=112&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=140&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=140&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=140&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em>For you: more from our <a href="https://theconversation.com/uk/topics/insights-series-71218?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Insights series</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/the-melting-arctic-is-a-crime-scene-the-microbes-i-study-have-long-warned-us-of-this-catastrophe-but-they-are-also-driving-it-207785">The melting Arctic is a crime scene. The microbes I study have long warned us of this catastrophe – but they are also driving it
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/beatrix-potters-famous-tales-are-rooted-in-stories-told-by-enslaved-africans-but-she-was-very-quiet-about-their-origins-202274">Beatrix Potter’s famous tales are rooted in stories told by enslaved Africans – but she was very quiet about their origins
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/invisible-windrush-how-the-stories-of-indian-indentured-labourers-from-the-caribbean-were-forgotten-206330">Invisible Windrush: how the stories of Indian indentured labourers from the Caribbean were forgotten
</a></em></p></li>
</ul>
<p><em>To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK"><strong>Subscribe to our newsletter</strong></a>.</em></p><img src="https://counter.theconversation.com/content/200594/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>‘I no longer exist, I have become a construct of their imagination. It is the ultimate act of dehumanisation.’Melissa Hamilton, Professor of Law & Criminal Justice, University of SurreyPamela Ugwudike, Professor of Criminology, University of SouthamptonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2074802023-06-23T12:28:05Z2023-06-23T12:28:05ZThe folly of making art with text-to-image generative AI<figure><img src="https://images.theconversation.com/files/533577/original/file-20230622-5172-et0jx.png?ixlib=rb-1.1.0&rect=334%2C162%2C1369%2C838&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Obtaining a desired image can be a long exercise in trial and error.</span> <span class="attribution"><a class="source" href="https://i0.wp.com/syncedreview.com/wp-content/uploads/2021/12/image-92.png?resize=1153%2C580&ssl=1">OpenAI</a></span></figcaption></figure><p>Making art using artificial intelligence isn’t new. <a href="https://news.artnet.com/art-world/artificial-intelligence-art-history-2045520">It’s as old as AI itself</a>. </p>
<p>What’s new is that a wave of tools now let most people generate images by entering a text prompt. All you need to do is write “a landscape in the style of van Gogh” into a text box, and the AI can create a beautiful image as instructed. </p>
<p>The power of this technology lies in its capacity to use human language to control art generation. But do these systems accurately translate an artist’s vision? Can bringing language into art-making truly lead to artistic breakthroughs? </p>
<h2>Engineering outputs</h2>
<p>I’ve worked with generative AI <a href="https://scholar.google.com/citations?user=DxQiCiIAAAAJ&hl=en">as an artist and computer scientist</a> for years, and I would argue that this new type of tool constrains the creative process. </p>
<p>When you write a text prompt to generate an image with AI, there are infinite possibilities. If you’re a casual user, you might be happy with what AI generates for you. And startups and investors <a href="https://www.cnbc.com/2022/10/08/generative-ai-silicon-valleys-next-trillion-dollar-companies.html">have poured billions</a> into this technology, seeing it as an easy way to generate graphics for articles, video game characters and advertisements.</p>
<figure class="align-center ">
<img alt="Grid of many images of cartoon women in various costumes." src="https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=285&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=285&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=285&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=358&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=358&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533578/original/file-20230622-19-fg7z51.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=358&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Generative AI is seen as a promising tool for coming up with video game characters.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/c/cc/X-Y_plot_of_algorithmically-generated_AI_art_by_different_science-fiction_subgenres.png">Benlisquare/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In contrast, an artist might need to write an essay-like prompt to generate a high-quality image that reflects their vision – with the right composition, the right lighting and the correct shading. That long prompt is not necessarily descriptive of the image but typically uses lots of keywords to steer the system towards what’s in the artist’s mind. There’s a relatively new term for this: <a href="https://time.com/6272103/ai-prompt-engineer-job/">prompt engineering</a>.</p>
<p>Basically, the role of an artist using these tools is reduced to reverse-engineering the system to find the right keywords to compel the system to generate the desired output. It takes a lot of effort, and much trial and error, to find the right words.</p>
<h2>AI isn’t as intelligent as it seems</h2>
<p>To learn how to better control the outputs, it’s important to recognize that most of these systems <a href="https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473">are trained on images and captions from the internet</a>. </p>
<p>Think about what a typical image caption tells about an image. Captions are typically written to complement the visual experience in web browsing. </p>
<p>For example, the caption might describe the name of the photographer and the copyright holder. On some websites, like Flickr, a caption typically describes the type of camera and the lens used. On other sites, the caption describes the graphic engine and hardware used to render an image. </p>
<p>So to write a useful text prompt, users need to insert many nondescriptive keywords for the AI system to create a corresponding image.</p>
<p>Today’s AI systems are not as intelligent as they seem; they are essentially smart retrieval systems that have a huge memory and work by association.</p>
<h2>Artists frustrated by a lack of control</h2>
<p>Is this really the sort of tool that can help artists create great work? </p>
<p>At Playform AI, a generative AI art platform that I founded, we <a href="https://www.playform.io/editorial/survey">conducted a survey</a> to better understand artists’ experiences with generative AI. We collected responses from over 500 digital artists, traditional painters, photographers, illustrators and graphic designers who had used platforms such as DALL-E, Stable Diffusion and Midjourney, among others. </p>
<p>Only 46% of the respondents found such tools to be “very useful,” while 32% found them somewhat useful but couldn’t integrate them into their workflow. The rest of the users – 22% – didn’t find them useful at all. </p>
<p>The main limitation artists and designers highlighted was a lack of control. On a scale of 0 to 10, with 10 being most control, respondents rated their ability to control the outcome at between 4 and 5. Half the respondents found the outputs interesting, but not of a high enough quality to be used in their practice. </p>
<p>When it came to beliefs about whether generative AI would influence their practice, 90% of the artists surveyed thought that it would; 46% believed that the effect would be a positive one, with 7% predicting that it would have a negative effect. And 37% thought their practice would be affected but weren’t sure in what way. </p>
<h2>The best visual art transcends language</h2>
<p>Are these limitations fundamental, or will they just go away as the technology improves? </p>
<p>Of course, newer versions of generative AI will give users more control over outputs, along with higher resolutions and better image quality. </p>
<p>But to me, the main limitation, as far as art is concerned, is foundational: it’s the process of using language as the main driver in generating the image. </p>
<p>Visual artists, by definition, are <a href="https://psmag.com/news/the-thinking-process-of-the-visual-artist">visual thinkers</a>. When they imagine their work, they usually draw from visual references, not words – a memory, a collection of photographs or other art they’ve encountered. </p>
<p>When language is in the driver’s seat of image generation, I see an extra barrier between the artist and the digital canvas. Pixels will be rendered only through the lens of language. Artists lose the freedom of manipulating pixels outside the boundaries of semantics.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Grid of different cartoon images of an animal with wings." src="https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=721&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=721&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=721&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=906&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=906&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533559/original/file-20230622-5432-utxd4p.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=906&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The same input can lead to a range of random outputs.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/a/a3/DALL-E_sample.png">OpenAI/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>There’s another fundamental limitation in text-to-image technology.</p>
<p>If two artists enter the exact same prompt, it’s very unlikely that the system will generate the same image. That’s not due to anything the artist did; the different outcomes are simply due to the AI’s <a href="https://lilianweng.github.io/posts/2021-07-11-diffusion-models/">starting from different random initial images</a>. </p>
<p>In other words, the artist’s output is boiled down to chance.</p>
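<p>The effect of the random starting point can be illustrated with a toy stand-in for an image generator, where the “image” is just a short list of numbers. Real diffusion models are far more complex; this sketch only shows why one prompt run twice can yield different results, while a fixed seed reproduces the same one:</p>

```python
import random

# Toy stand-in for a text-to-image system. The "image" is a list of
# numbers derived from random initial noise, loosely conditioned on
# the prompt. This is not how diffusion models actually denoise; it
# only demonstrates the role of the random seed.

def generate(prompt, seed):
    rng = random.Random(seed)              # the random starting point
    noise = [rng.random() for _ in range(4)]
    # "Conditioning" on the prompt, reduced here to a trivial offset.
    return [round(n + 0.01 * len(prompt), 3) for n in noise]

same_prompt = "a landscape in the style of van Gogh"
print(generate(same_prompt, seed=1))       # one outcome
print(generate(same_prompt, seed=2))       # different outcome, same prompt
print(generate(same_prompt, seed=1))       # fixing the seed reproduces the first
```

This is why many tools expose a seed parameter: without pinning it, the artist's output is partly down to chance.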
<p>Nearly two-thirds of the artists we surveyed had concerns that their AI generations might be similar to other artists’ works and that the technology does not reflect their identity – or even replaces it altogether.</p>
<p>The issue of artist identity is crucial when it comes to making and recognizing art. In the 19th century, when photography started to become popular, there was <a href="https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473">a debate about whether photography was a form of art</a>. It came down to a court case in France in 1861 to decide whether photography could be copyrighted as an art form. The decision hinged on whether an artist’s unique identity could be expressed through photographs. </p>
<p>Those same questions emerge when considering AI systems that are taught with the internet’s existing images. </p>
<p>Before the emergence of text-to-image prompting, <a href="https://theconversation.com/when-the-line-between-machine-and-artist-becomes-blurred-103149">creating art with AI was a more elaborate process</a>: Artists usually trained their own AI models based on their own images. That allowed them to use their own work as visual references and retain more control over the outputs, which better reflected their unique style.</p>
<p>Text-to-image tools might be useful for certain creators and casual everyday users who want to create graphics for a work presentation or a social media post. </p>
<p>But when it comes to art, I can’t see how text-to-image software can adequately reflect the artist’s true intentions or capture the beauty and emotional resonance of works that grip viewers and make them see the world anew.</p><img src="https://counter.theconversation.com/content/207480/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The author is the founder of Playform AI</span></em></p>Visual artists draw from visual references, not words, as they imagine their work. So when language is in the driver’s seat of making art, it erects a barrier between the artist and the canvas.Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2050692023-06-05T14:25:30Z2023-06-05T14:25:30ZHepatitis B is a life-threatening liver infection – our machine learning tool could help with early detection<figure><img src="https://images.theconversation.com/files/529046/original/file-20230530-15-9wf3s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>More than <a href="https://www.who.int/news-room/fact-sheets/detail/hepatitis-b">296 million people</a> worldwide live with hepatitis B, a potentially life-threatening liver infection caused by the hepatitis B virus (HBV). Most don’t know they are infected, so they don’t get medical care. Clinical care improves the patient’s outcome and can prevent them from infecting others. </p>
<p>Early detection of HBV-infected patients could therefore improve patient prognosis and stop transmission within populations. </p>
<p>The recommended test for HBV is an <a href="https://apps.who.int/iris/bitstream/handle/10665/254621/9789241549981-eng.pdf">enzyme immunoassay</a>. It detects the hepatitis B surface <a href="https://www.britannica.com/science/antigen">antigen</a> – a substance that is a sign of the presence of the virus in the person’s body. </p>
<p>But these chemical tests are very <a href="https://apps.who.int/iris/bitstream/handle/10665/254621/9789241549981-eng.pdf">expensive</a> and need dedicated facilities. They are generally out of reach for people in low-resource settings, where laboratories are few and isolated. Clinicians in these settings work with limited resources against <a href="https://www.afro.who.int/news/91-million-africans-infected-hepatitis-b-or-c">a silent killer</a> that may not show obvious symptoms for decades until the liver is severely damaged. </p>
<p>Part of the solution for public health challenges like this may lie in <a href="https://theconversation.com/what-machine-learning-can-offer-nigerias-healthcare-system-163593">machine learning</a>. This refers to the ability of computers to make sense of large amounts of information – and to build on their own “knowledge”.</p>
<p>We are among a group of researchers at the <a href="https://nceph.anu.edu.au/">Australian National University</a> who study machine learning and infectious disease. Our <a href="https://bmcinfectdis.biomedcentral.com/articles/10.1186/s12879-021-06800-6">earlier research</a> found that the prevalence of HBV in Nigeria was high (9.5%, where anything above 8% is considered high). And the levels of infection varied significantly across geopolitical zones. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/hepatitis-b-in-nigeria-fresh-data-to-inform-prevention-and-care-173018">Hepatitis B in Nigeria: fresh data to inform prevention and care</a>
</strong>
</em>
</p>
<hr>
<p>Access to affordable testing was a problem in the country. So we <a href="https://www.nature.com/articles/s41598-023-30440-2">developed a tool</a> to help clinicians detect hepatitis B infections earlier.</p>
<p>Using Nigerian patient data, we developed an algorithm that learns from the patient data, identifies patterns, and makes intelligent decisions to provide alerts and detection of a patient’s HBV infection status. The aim is to enhance clinical decision-making and improve patient outcomes. Enabling earlier care should give millions of people a better quality of life and help reduce HBV prevalence.</p>
<h2>How did we do the work?</h2>
<p>To build this tool, we worked closely with colleagues at the <a href="https://nimr.gov.ng/">Nigerian Institute of Medical Research</a>. They provided access to data from 916 anonymous patients, in an ethically approved manner. The institute is Nigeria’s foremost medical research institute and it hosts a dedicated hepatitis B clinic.</p>
<p>We used the results of normal blood tests that measure red and white blood cells, salts, enzymes and other blood chemicals, along with results of tests for hepatitis B. Routine blood tests can be very useful in facilitating early diagnosis if the subtle interactions between measurements can be spotted. Patterns of interactions may be a signal of disease. But it’s easy to miss them. </p>
<p>Using the data, we trained an algorithm to identify pathology markers that predict a patient’s HBV infection status. One reason machine learning is so powerful is that it does not require humans to tell the computer which features to identify. Our algorithm sifts through the data to find patterns that are common to patients with HBV infection and then match those patterns in people it has not seen before. </p>
<p>Once validated, the algorithm can be integrated into routine clinical workflow in a real-world clinical setting, as an intelligent decision support system. This will help detect HBV infections earlier, without resorting to expensive immunoassay. </p>
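<p>The published model is more sophisticated than anything that fits in a few lines, but the core idea – learn a pattern from labelled blood-test results, then classify unseen patients – can be sketched with a toy nearest-centroid rule. The feature values below are synthetic, and the choice of “ALT level” and age as features is purely illustrative:</p>

```python
import math

# Toy sketch, not the authors' published model: fit by averaging the
# feature vectors of each labelled group, then assign a new patient
# to whichever group average (centroid) is closer.

def centroid(rows):
    """Mean feature vector of a list of patients."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(positive, negative):
    return {"pos": centroid(positive), "neg": centroid(negative)}

def predict(model, patient):
    if math.dist(patient, model["pos"]) < math.dist(patient, model["neg"]):
        return "HBV"
    return "no HBV"

# Synthetic [ALT level, age] pairs: in this made-up data, infected
# patients tend to have higher enzyme levels.
infected = [[80, 40], [95, 35], [70, 50]]
healthy = [[20, 38], [25, 45], [30, 30]]

model = train(infected, healthy)
print(predict(model, [85, 42]))  # "HBV" – close to the infected centroid
print(predict(model, [22, 41]))  # "no HBV" – close to the healthy centroid
```

Real clinical models would use many more markers, far more patients and careful validation, but the train-then-predict shape is the same.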
<h2>What did we find?</h2>
<p>For the 916 people in <a href="https://www.nature.com/articles/s41598-023-30440-2">our study</a>, our algorithm could reliably and accurately predict those infected with HBV. Its discrimination performance (its ability to separate infected from uninfected patients) was 90%, indicating that the algorithm was highly accurate.</p>
<p>We then translated this into a user-friendly, web-accessible app to use in further studies. The decision support tool, <a href="https://hepblivetest.app/">Hep B LiveTest</a>, was designed as a prototype.</p>
<p>The tool found that a combination of two enzymes, patient age and white blood cell count was the strongest predictor of HBV infection. The two enzymes are aspartate aminotransferase and alanine aminotransferase. When levels of these in the blood are high, it may indicate potential liver damage. Serum albumin, a liver function marker, was also identified as an important predictive marker of infection.</p>
<p>A <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/jmv.23609">study of Chinese patients</a> showed trends similar to those suggested by our algorithm. Alanine aminotransferase and serum albumin were the most prominent predictors.</p>
<h2>What’s next?</h2>
<p>It is important to recognise the limitations of machine learning. Before a tool like this is put to work in routine clinical practice, it needs to be validated using diverse data. </p>
<p>Our machine learning tool was trained with data from Nigeria, so its performance may be limited to that setting. We are in the process of training our algorithm with more data from other sources and validating its robustness in other settings. This will inform how broadly applicable our algorithm is and how well it might work in other populations – particularly in settings with a low prevalence of hepatitis B infections.</p>
<p>Though our machine learning tool is only a first test, the results are highly encouraging. <a href="https://news.un.org/en/story/2021/07/1096592">A person dies from viral hepatitis B every 30 seconds</a>. We hope to put our system to work soon in the urgent fight against this <a href="https://www.nature.com/articles/d44148-022-00128-2">vaccine-preventable disease</a>. </p>
<p>We believe that machine learning has a role in enhancing the World Health Organization’s targets of <a href="https://apps.who.int/iris/bitstream/handle/10665/246177/WHO-HIV-2016.06-eng.pdf?sequence=1&isAllowed=y">eliminating viral hepatitis as a public health problem by 2030</a>.</p><img src="https://counter.theconversation.com/content/205069/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Brett A. Lidbury receives funding from the Quality Use of Pathology Program (QUPP) - Commonwealth Department of Health. He holds a Fellowship with the Royal College of Pathologists of Australasia (RCPA) Faculty of Science, and collaborates with the RCPA Quality Assurance Programme (RCPAQAP). </span></em></p><p class="fine-print"><em><span>Busayo I. Ajuwon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Machine learning can spot patterns in patient data and help detect hepatitis B earlier, which could save lives.Busayo I. Ajuwon, Research Scientist, Australian National UniversityBrett A. Lidbury, Associate ProfessorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2037862023-05-30T11:37:27Z2023-05-30T11:37:27ZAnkylosing spondylitis: machine learning could pave the way for early diagnosis of inflammatory arthritis<figure><img src="https://images.theconversation.com/files/526271/original/file-20230515-21229-5sc62o.jpg?ixlib=rb-1.1.0&rect=0%2C18%2C4140%2C3526&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An X-ray image comparing a healthy spine and one showing signs of ankylosing spondylitis.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ls-spine-xray-image-ap-view-274780355">Suttha Burawonk/Shutterstock</a></span></figcaption></figure><p><a href="https://www.nhs.uk/conditions/ankylosing-spondylitis/">Ankylosing spondylitis</a> (AS) is the second most common type of inflammatory arthritis, often affecting teenagers and young adults. Symptoms of AS can include back pain, stiffness, joint inflammation (arthritis), inflammation where tendons attach to bones (enthesitis), and fatigue. 
Over time, these symptoms can lead to spinal fusion, which significantly affects quality of life, particularly in young people.</p>
<p>Unfortunately, diagnosing AS can be a lengthy process, taking up to ten years from the onset of symptoms and usually requiring X-rays. The slow progression of the condition, coupled with the lack of a definitive test, contributes to these delays. </p>
<p>However, early detection of the condition can make a tremendous difference, halting the degenerative process and preserving a good quality of life for those affected.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/unexplained-lower-back-pain-it-could-be-ankylosing-spondylitis-56809">Unexplained lower back pain? It could be ankylosing spondylitis</a>
</strong>
</em>
</p>
<hr>
<p>Our study explored the potential of using routinely collected healthcare data from GPs and hospitals, combined with advanced machine learning techniques, to identify AS at an earlier stage. Machine learning involves using algorithms to analyse sample data, enabling predictions and decisions without explicit programming. </p>
<p>We analysed data separately for men and women, and <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0279076">our findings</a> could transform the way in which GPs detect and diagnose AS.</p>
<h2>A valuable tool</h2>
<p>To conduct our study, we used anonymous data from a national data repository at Swansea University Medical School. Patients with AS were identified and matched with people with no record of a diagnosis.</p>
<p>Our analysis of this data found that factors such as lower back pain, <a href="https://www.nhs.uk/conditions/uveitis/">uveitis</a> (inflammation of the middle layer of the eye), and use of non-steroidal anti-inflammatory drugs before the age of 20 were factors associated with an increased risk of developing AS in men. </p>
<p>In contrast, our model revealed that women tend to experience AS symptoms at a later age, and often rely on multiple pain relief medications compared with men. This possibly indicates a higher likelihood of misdiagnosis of the condition in women.</p>
<p>Machine learning is a valuable tool for profiling and understanding the characteristics of people who are likely to develop AS. It performs well in test data sets with artificially high prevalence rates. </p>
<p>However, when applied to the general population in GP practices and hospitals, where AS is rare, even the best model can only achieve a low positive predictive value of 1.4%. (That’s the probability that, following a positive test result, the individual truly has AS.)</p>
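A figure like 1.4% follows from Bayes’ theorem: when a condition is rare, even an accurate model produces mostly false positives. The sensitivity and specificity below are illustrative guesses, not the study’s reported figures.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive result), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical model: 78% sensitivity, 86% specificity.
# In a balanced test set (50% prevalence), PPV looks excellent...
high_prev = positive_predictive_value(0.78, 0.86, 0.50)    # ~0.85
# ...but at a realistic population prevalence of 0.25%, it collapses:
low_prev = positive_predictive_value(0.78, 0.86, 0.0025)   # ~0.014, i.e. about 1.4%
```

This is why the authors suggest applying multiple models over time: each stage narrows the population, raising the effective prevalence and so the predictive value of the next stage.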
<p>So, using multiple models over time may be necessary to narrow down the population and improve this predictive value, which would result in a faster AS diagnosis.</p>
<figure class="align-center ">
<img alt="A person wearing a long sleeved white top faces away from the camera clasping the bottom of their back." src="https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/526283/original/file-20230515-24356-b8m735.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Ankylosing spondylitis is the second most common cause of inflammatory arthritis.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/herniated-discspondylosis-scoliosis-asian-woman-she-1481955053">jaojormami/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Acknowledge the challenges too</h2>
<p>Machine learning techniques have tremendous potential to improve patient care. But it is also crucial to acknowledge the challenges associated with using these techniques effectively. </p>
<p>These models depend on high-quality data that is diverse and comprehensive to produce reliable, accurate results. But healthcare data can be limited due to privacy concerns, data sensitivity and lack of standardisation. These limitations may therefore compromise the accuracy and reliability of the models.</p>
<p>It’s important to acknowledge that machine learning in relation to this topic is still in its infancy. To develop this further, we will need to gather more detailed data to improve prediction rates and clinical usefulness.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-a-deranged-provocateur-to-ibms-failed-ai-superproject-the-controversial-story-of-how-data-has-transformed-healthcare-189362">From a 'deranged' provocateur to IBM's failed AI superproject: the controversial story of how data has transformed healthcare</a>
</strong>
</em>
</p>
<hr>
<p>But our study demonstrates the enormous potential that machine learning has to help identify people with AS and better understand their diagnostic journeys through the health system. </p>
<p>We know that the early detection and diagnosis of AS is crucial to secure the best outcomes for patients. We believe machine learning could help with this. It could also empower GPs, helping them to detect and refer patients more effectively and efficiently.</p><img src="https://counter.theconversation.com/content/203786/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jonathan Kennedy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Our new study demonstrates the enormous potential that machine learning has to help identify people with ASJonathan Kennedy, Data Lab Manager, Swansea UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2061682023-05-26T18:01:24Z2023-05-26T18:01:24ZIncluding race in clinical algorithms can both reduce and increase health inequities – it depends on what doctors use them for<figure><img src="https://images.theconversation.com/files/528403/original/file-20230525-15-2tu1k6.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2121%2C1412&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An increasing number of health care decisions rely on information from algorithms.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/doctors-discussing-patients-test-results-royalty-free-image/1062188494">Tom Werner/Digital Vision via Getty Images</a></span></figcaption></figure><p>Health practitioners are <a href="https://doi.org/10.1056/NEJMms2004740">increasingly concerned</a> that because race is a social construct, and the biological mechanisms of how race affects clinical outcomes are often unknown, including race in predictive algorithms for clinical decision-making may worsen inequities.</p>
<p>For example, to calculate an estimate of kidney function called the <a href="https://doi.org/10.7326%2F0003-4819-150-9-200905050-00006">estimated glomerular filtration rate, or eGFR</a>, health care providers use an algorithm based on age, biological sex, race (Black or non-Black) and serum creatinine, a waste product the kidneys release into the blood. A higher eGFR value means better kidney health. These eGFR predictions are used to <a href="https://optn.transplant.hrsa.gov/professionals/by-organ/kidney-pancreas/kidney-allocation-system/">allocate kidney transplants in the U.S.</a></p>
<p>Based on this algorithm, which was <a href="https://www.kidney.org/atoz/content/race-and-egfr-what-controversy">trained on actual GFR values from patients</a>, a Black patient would be assigned a higher eGFR than a non-Black patient of the same age, sex and serum creatinine level. This implies that some Black patients would be considered to have healthier kidneys than otherwise similar non-Black patients and less likely to be assigned a kidney transplant.</p>
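The race coefficient at issue can be seen in the widely published 2009 CKD-EPI creatinine equation, the race-inclusive eGFR formula this debate centres on. The sketch below is for illustration only and must not be used clinically.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI creatinine equation.
    Illustrative only -- this is the race-inclusive version discussed here."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient: same inputs, ~16% higher estimate
    return egfr

# Same age, sex and serum creatinine; only the race flag differs.
non_black = egfr_ckd_epi_2009(1.2, age=50, female=False, black=False)
black = egfr_ckd_epi_2009(1.2, age=50, female=False, black=True)
```

The 1.159 multiplier means a Black patient is always assigned a roughly 16% higher eGFR than an otherwise identical non-Black patient, which is precisely the effect the 2021 race-free revisions sought to remove.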
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/1O7Ov1nxMc0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Biased clinical algorithms can lead to inaccurate diagnoses and delayed treatment.</span></figcaption>
</figure>
<p>In 2021, however, researchers found that excluding race in the original eGFR equations could <a href="https://doi.org/10.1056/NEJMoa2102953">lead to larger discrepancies</a> between estimated and actual GFR values for both Black and non-Black patients. They also found adding an additional biomarker called cystatin C can improve predictions. However, even with this biomarker, excluding race from the algorithm still led to elevated discrepancies across races.</p>
<p>I am a <a href="https://scholar.google.com/citations?user=AR72duAAAAAJ&hl=en">health economist and statistician</a> who studies how unobserved factors in data can result in biases that lead to inefficiencies, inequities and disparities in health care. My recently published research suggests that excluding race from certain diagnostic algorithms <a href="https://www.science.org/doi/10.1126/sciadv.add2704">could worsen health inequities</a>.</p>
<h2>Different approaches to fairness</h2>
<p>Researchers use <a href="https://plato.stanford.edu/entries/economic-justice/">different economic frameworks</a> to understand how society allocates resources. Two key frameworks are utilitarianism and equality of opportunity.</p>
<p>A purely <a href="https://doi.org/10.3386/w30700">utilitarian outlook</a> seeks to identify what features would get the most out of a positive outcome or reduce the harm from a negative one, ignoring who possesses those features. This approach allocates resources to those with the most opportunities to generate positive outcomes or mitigate negative ones.</p>
<p>A utilitarian approach would always include race and ethnicity to improve the prediction power and accuracy of algorithms, regardless of whether it’s fair. For example, utilitarian policies would aim to maximize overall survival among people seeking organ transplants. They would allocate organs to those who would survive the longest after transplantation, even if people who, through circumstances outside their control, would not survive as long need the organs most and would die sooner without them.</p>
<p>Although utilitarian approaches do not take fairness into account, an approach that does would ask two questions: How do we define fairness? Are there conditions when maximizing an algorithm’s prediction power and accuracy would not conflict with fairness?</p>
<p>To answer these questions, I apply the <a href="https://www.jstor.org/stable/41106460">equality of opportunity</a> framework, which aims to allocate resources in a way that allows everyone the same chance of obtaining similar outcomes, without being disadvantaged by circumstances outside of their control. Researchers have used this framework in many contexts, such as <a href="https://www.jstor.org/stable/447264">political science</a>, <a href="https://press.uchicago.edu/ucp/books/book/chicago/E/bo22415931.html">economics</a> and <a href="https://www.jpe.ox.ac.uk/papers/what-makes-discrimination-wrong/">law</a>. The U.S. Supreme Court has also applied equality of opportunity in <a href="https://edeq.stanford.edu/sections/section-4-lawsuits/landmark-us-cases-related-equality-opportunity-k-12-education">several landmark rulings in education</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Health care worker looking at tablet in an exam room" src="https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/528406/original/file-20230525-21-rcdl1v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Including different variables in clinical algorithms can lead to very different results.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/unrecognizeable-person-using-digital-tablet-royalty-free-image/1421626437">SDI Productions/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Equality of opportunity</h2>
<p>There are two fundamental principles in equality of opportunity.</p>
<p>First, inequality of outcomes is unethical if it results from differences in circumstances that are outside of an individual’s own control, such as the income of a child’s parents, exposure to systemic racism or living in <a href="https://theconversation.com/black-mothers-trapped-in-unsafe-neighborhoods-signal-the-stressful-health-toll-of-gun-violence-in-the-u-s-203307">violent and unsafe environments</a>. This can be remedied by compensating individuals with disadvantaged circumstances in a way that allows them the same opportunity to obtain certain health outcomes as those who are not disadvantaged by their circumstances.</p>
<p>Second, inequality of outcomes for people in similar circumstances that result from differences in individual effort, such as practicing health-promoting behaviors like diet and exercise, is not unethical, and policymakers can reward those achieving better outcomes through such behaviors. However, differences in individual effort that occur because of circumstances, such as living in an area with <a href="https://theconversation.com/how-urban-planning-and-housing-policy-helped-create-food-apartheid-in-us-cities-154433">limited access to healthy food</a>, are not addressed under equality of opportunity. Keeping all circumstances the same, any differences in effort between individuals should be due to preferences, free will and perceived benefits and costs. This is called <a href="https://doi.org/10.1257/jel.20151206">accountable effort</a>. So, two individuals with the same circumstances should be rewarded according to their accountable efforts, and society should accept the resulting differences in outcomes.</p>
<p>Equality of opportunity implies that if algorithms were to be used for clinical decision-making, then it is necessary to understand what causes variation in the predictions they make. </p>
<p>If variation in predictions results from differences in circumstances or biological conditions but not from individual accountable effort, then it is appropriate to use the algorithm for compensation, such as allocating kidneys so everyone has an equal opportunity to live the same length of life, but not for reward, such as allocating kidneys to those who would live the longest with the kidneys.</p>
<p>In contrast, if variation in predictions results from differences in individual accountable effort but not from their circumstances, then it is appropriate to use the algorithm for reward but not compensation.</p>
<h2>Evaluating clinical algorithms for fairness</h2>
<p>To hold machine learning and other artificial intelligence algorithms accountable to a standard of equity, I applied the principles of equality of opportunity to
<a href="https://www.science.org/doi/10.1126/sciadv.add2704">evaluate whether race should be included</a> in clinical algorithms. I ran simulations under both ideal data conditions, where all data on a person’s circumstances is available, and real data conditions, where some data on a person’s circumstances is missing.</p>
<p>In these simulations, I explicitly assume that <a href="https://www.genome.gov/genetics-glossary/Race">race is a social and not biological construct</a>. Variables such as race and ethnicity are often <a href="https://www.ama-assn.org/press-center/press-releases/new-ama-policies-recognize-race-social-not-biological-construct">proxies for various circumstances</a> individuals face that are out of their control, such as systemic racism that contributes to health disparities.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/926PqQUOVOg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">As a social construct, race is often a proxy for nonbiological circumstances.</span></figcaption>
</figure>
<p>I evaluated two categories of algorithms.</p>
<p>The first, diagnostic algorithms, makes predictions based on outcomes that have already occurred at the time of decision-making. For example, diagnostic algorithms are used to detect gallstones in patients with abdominal pain, to diagnose urinary tract infections, or to detect breast cancer using radiologic imaging.</p>
<p>The second, prognostic algorithms, predicts future outcomes that have not yet occurred at the time of decision-making. For example, prognostic algorithms are used to predict whether a patient will live if they do or do not obtain a kidney transplant.</p>
<p>I found that, under an equality of opportunity approach, diagnostic models that do not take race into account would <a href="https://www.science.org/doi/10.1126/sciadv.add2704">increase systemic inequities and discrimination</a>. I found similar results for prognostic models intended to compensate for individual circumstances. For example, excluding race from algorithms that predict the future survival of patients with kidney failure would fail to identify those with underlying circumstances that make them more vulnerable.</p>
<p>Including race in prognostic models intended to reward individual efforts <a href="https://www.science.org/doi/10.1126/sciadv.add2704">can also increase disparities</a>. For example, including race in algorithms that predict how much longer a person would live after a kidney transplant may fail to account for individual circumstances that could limit how much longer they live.</p>
<h2>Unanswered questions and future work</h2>
<p>Better biomarkers may one day predict health outcomes more accurately than race and ethnicity do. Until then, including race in certain clinical algorithms could help reduce disparities.</p>
<p>Although my study uses an equality of opportunity framework to measure how race and ethnicity affect the results of prediction algorithms, researchers don’t know whether other ways to approach fairness would lead to different recommendations. How to choose between different approaches to fairness also remains to be seen. Moreover, there are questions about how multiracial groups should be coded in health databases and algorithms.</p>
<p><a href="https://sop.washington.edu/choice/">My colleagues and I</a> are exploring many of these unanswered questions to reduce algorithmic discrimination. We believe our work will readily extend to other areas outside of health, including education, crime and labor markets.</p><img src="https://counter.theconversation.com/content/206168/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anirban Basu received funding support from a consortium of ten biomedical companies to the University of Washington through an unrestricted gift. </span></em></p>Biased algorithms in health care can lead to inaccurate diagnoses and delayed treatment. Deciding which variables to include to achieve fair health outcomes depends on how you approach fairness.Anirban Basu, Professor of Health Economics, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2037232023-05-24T17:03:32Z2023-05-24T17:03:32ZThe UK public sector is already using AI more than you realise – without oversight it’s impossible to understand the risks<figure><img src="https://images.theconversation.com/files/527233/original/file-20230519-21-a54gyh.jpg?ixlib=rb-1.1.0&rect=62%2C31%2C2933%2C1963&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/face-detection-recognition-citizens-people-ai-1791158417">DedMityay/Shutterstock</a></span></figcaption></figure><p>The rapid rise of artificial intelligence (AI) products like the text-generating tool ChatGPT has politicians, technology leaders, artists and researchers <a href="https://www.telegraph.co.uk/business/2023/05/05/joe-biden-elon-musk-terrified-ai/">worried</a>. Meanwhile, proponents argue that AI could improve lives in fields like <a href="https://theconversation.com/ai-has-potential-to-revolutionise-health-care-but-we-must-first-confront-the-risk-of-algorithmic-bias-204112">healthcare</a>, <a href="https://www.unesco.org/en/digital-education/artificial-intelligence">education</a> and <a href="https://www.lse.ac.uk/granthaminstitute/news/ai-will-accelerate-tipping-points-for-crucial-green-technologies/">sustainable energy</a>. </p>
<p>The UK government is keen to embed AI in its day-to-day operations and set out a <a href="https://www.gov.uk/government/publications/national-ai-strategy">national strategy</a> to do just that in 2021. The aim, according to the strategy, is to “lead from the front and set an example in the safe and ethical deployment of AI”. </p>
<p>AI <a href="https://www.opendemocracy.net/en/openjustice/unlawful-state/price-and-prejudice-automated-decision-making-and-uk-government/">is not without risks</a>, particularly when it comes to individual rights and discrimination. These are risks the government <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957259/Review_into_bias_in_algorithmic_decision-making.pdf">is aware of</a>, but a <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach">recent policy white paper</a> shows the government is reluctant to increase AI regulation. It is difficult to imagine how “safe and ethical deployment” can be achieved without this.</p>
<p>Evidence from other countries shows the downsides of using AI in the public sector. Many in the Netherlands are still reeling from a <a href="https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/">scandal</a> related to the use of machine learning to detect welfare fraud. Algorithms were found to have falsely accused thousands of parents of child benefits fraud. Cities across the country are <a href="https://www.lighthousereports.com/investigation/the-algorithm-addiction/">reportedly still using such technology</a> to target low-income neighbourhoods for fraud investigations, with devastating consequences for people’s wellbeing.</p>
<p>An investigation in <a href="https://www.lighthousereports.com/investigation/spains-ai-doctor/">Spain</a> revealed deficiencies in software used to determine whether people were committing sickness benefit fraud. And in <a href="https://algorithmwatch.org/en/algorithm-school-system-italy/">Italy</a>, a faulty algorithm excluded much-needed qualified teachers from open jobs. It rejected their CVs entirely after considering them for only one job, rather than matching them to another suitable opening.</p>
<p>Public sector dependence on AI could also lead to <a href="https://www.ncsc.gov.uk/collection/machine-learning">cybersecurity risks</a>, or <a href="https://cetas.turing.ac.uk/publications/mitigating-supply-chain-threats-building-resilience-through-ai-enabled-early-warning">vulnerabilities in critical infrastructure</a> supporting the NHS and other essential public services. </p>
<p>Given these risks, it’s crucial that citizens can trust the government to be transparent about its use of AI. But the government is generally slow, or unwilling, to disclose details about this – something the parliamentary committee on standards in public life has <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/868284/Web_Version_AI_and_Public_Standards.PDF">heavily criticised</a>.</p>
<p>The government’s Centre for Data Ethics and Innovation <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957259/Review_into_bias_in_algorithmic_decision-making.pdf">recommended</a> publicising all uses of AI in significant decisions that affect people. The government subsequently developed <a href="https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub">one of the world’s first algorithmic transparency standards</a>, to encourage organisations to disclose to the public information about their use of AI tools and how they work. Part of this involves recording the information in a central repository.</p>
<p>However, the government made its use voluntary. So far, <a href="https://www.gov.uk/government/collections/algorithmic-transparency-reports">only six public sector organisations have</a> disclosed details of their AI use.</p>
<h2>Public sector AI use</h2>
<p>The legal charity <a href="https://publiclawproject.org.uk/">Public Law Project</a> recently launched a database showing that the use of AI in the UK public sector is much more widespread than official disclosures show. Through freedom of information requests, the <a href="https://publiclawproject.org.uk/resources/the-tracking-automated-government-register/">Tracking Automated Government (TAG) register</a> has, so far, tracked 42 instances of the public sector using AI.</p>
<p>Many of the tools are related to fraud detection and immigration decision-making, including <a href="https://freemovement.org.uk/home-office-refuses-to-disclose-inner-workings-of-sham-marriage-algorithm/">detecting sham marriages</a> or <a href="https://www.gov.uk/government/news/cutting-edge-data-and-ai-tech-to-help-government-hunt-down-fraudsters">fraud against the public purse</a>. Nearly half of UK’s local councils are also using AI to <a href="https://www.theguardian.com/society/2020/oct/28/nearly-half-of-councils-in-great-britain-use-algorithms-to-help-make-claims-decisions">prioritise access to housing benefits</a>. </p>
<p>Prison officers are using algorithms to <a href="https://www.thebureauinvestigates.com/stories/2019-11-14/prisoner-risk-algorithm-could-program-in-racism">assign newly convicted prisoners into risk categories</a>. Several police forces are using AI to assign similar risk scores, or trialling AI-based <a href="https://www.theguardian.com/uk-news/2023/may/03/metropolitan-police-live-facial-recognition-in-crowds-at-king-charles-coronation">facial recognition</a>. </p>
<p>The fact that the TAG register has publicised the use of AI in the public sector does not necessarily mean that the tools are harmful. But in most cases, the database adds this note: “The public body has not disclosed enough information to allow proper understanding of the specific risks posed by this tool.” People affected by these decisions can hardly be in a position to challenge them if it is not clear that AI is being used, or how. </p>
<p>Under the <a href="https://www.legislation.gov.uk/ukpga/2018/12/contents">Data Protection Act 2018</a>, people have the <a href="https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling/">right to an explanation</a> about automated decision making that has legal or similarly significant effects on them. But the government is <a href="https://publiclawproject.org.uk/resources/data-bill-no-2-puts-rights-at-risk-again/">proposing to cut back these rights</a> too. And even in their current form, they aren’t enough to tackle the wider social impacts of discriminatory algorithmic decision-making. </p>
<figure class="align-center ">
<img alt="A woman holds a paper letter and looks sad and concerned at it" src="https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/527244/original/file-20230519-29-wgefkd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Without more transparency, people may struggle to challenge algorithm-made decisions.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/all-lost-frustrated-millennial-businesswoman-receiving-1874616262">fizkes/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Light-touch regulation</h2>
<p>The government detailed its “pro-innovation” approach to AI regulation in a white paper, published March 2023, that sets <a href="https://www.gov.uk/government/news/uk-unveils-world-leading-approach-to-innovation-in-first-artificial-intelligence-white-paper-to-turbocharge-growth">five principles of AI regulation</a>, including safety, transparency and fairness.</p>
<p>The paper confirmed that the government does not plan to create a new AI regulator and that there will be no new AI legislation any time soon, instead tasking existing regulators with developing more detailed guidance.</p>
<p>And despite just six organisations using it so far, the government does not intend to mandate the use of the <a href="https://www.gov.uk/government/collections/algorithmic-transparency-reports">transparency standard</a> and central repository it developed. Nor are there plans to require public sector bodies to apply for a licence to use AI.</p>
<p>Without transparency or regulation, unsafe and unethical AI uses will be difficult to identify and are likely to come to light only after they have already done harm. And without additional rights for people, it will also be difficult to push back against public sector AI use or to claim compensation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/uk-risks-losing-out-on-hi-tech-growth-if-it-falters-on-ai-regulation-202817">UK risks losing out on hi-tech growth if it falters on AI regulation</a>
</strong>
</em>
</p>
<hr>
<p>Put simply, the government’s pro-innovation approach to AI does not include any tools to ensure it will meet its mission to “lead from the front and set an example in the safe and ethical deployment of AI”, despite the prime minister’s claim that <a href="https://www.theguardian.com/technology/2023/may/18/uk-will-lead-on-guard-rails-to-limit-dangers-of-ai-says-rishi-sunak">the UK will lead on “guard rails” to limit dangers of AI</a>.</p>
<p>The stakes are too high for citizens to pin their hopes on the public sector regulating itself, or imposing safety and transparency <a href="https://www.howtocrackanut.com/digital-procurement-governance">requirements on tech companies</a>.</p>
<p><a href="https://committees.parliament.uk/writtenevidence/113034/pdf/">In my view</a>, a government committed to proper AI governance would create a dedicated and well-resourced authority to oversee AI use in the public sector. Society can hardly extend a blank cheque for the government to use AI as it sees fit. However, that is what the government seems to expect.</p><img src="https://counter.theconversation.com/content/203723/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Albert Sanchez-Graells received funding from the British Academy. He is one of the Academy's 2022 Mid-Career Fellows (MCFSS22\220033, £127,125.58). His research and views are however not attributable to the British Academy.</span></em></p>Without more transparency about AI use, it will be difficult for people to challenge biased decisions against them.Albert Sanchez-Graells, Professor of Economic Law and Co-Director of the Centre for Global Law and Innovation, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2059952023-05-22T20:06:34Z2023-05-22T20:06:34ZWhat is Bluesky and how’s it different to Twitter?<figure><img src="https://images.theconversation.com/files/527163/original/file-20230519-27-f9etwc.jpg?ixlib=rb-1.1.0&rect=8%2C16%2C5447%2C3620&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Amid management changes at Twitter, discontented users are exploring an alternative social media platform called Bluesky. According to media <a href="https://www.thewrap.com/bluesky-app-downloads-surge-jack-dorsey-twitter/">reports</a>, downloads of the Bluesky app surged more than 600% in April.</p>
<p>Initially conceived by Twitter co-founder Jack Dorsey in 2019 as a complementary project aimed to improve Twitter user experience, Bluesky transitioned into a standalone project in <a href="https://fortune.com/2023/04/28/jack-dorsey-bluesky-biggest-single-day-jump-new-users/">early 2022</a>, and its iOS app was released in February <a href="https://www.theguardian.com/technology/2023/may/16/it-has-high-ambitions-but-can-jack-dorseys-spinoff-bluesky-really-take-over-from-twitter">this year</a> followed by an Android version in <a href="https://www.theverge.com/2023/4/19/23690314/bluesky-decentralized-twitter-alternative-android">April</a>.</p>
<p>Visually, Bluesky looks similar to Twitter. The timeline is called the “skyline” and tweets are “skeets”. But two main differences drive its popularity: decentralisation and invite-only access. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1652018313797709824&quot;}"></div></p>
<p>Decentralisation was a driving force behind Dorsey’s creation of Bluesky. So what does that mean and how’s this app different to Twitter?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-mastodon-the-twitter-alternative-people-are-flocking-to-heres-everything-you-need-to-know-194059">What is Mastodon, the 'Twitter alternative' people are flocking to? Here's everything you need to know</a>
</strong>
</em>
</p>
<hr>
<h2>‘Decentralised’ social media</h2>
<p>Dorsey is a big proponent of decentralised control and cryptocurrency. He believes centralised platforms like Twitter cannot effectively enforce policies against abuse and misinformation, and that their proprietary algorithms fail to meet user needs. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1204766084353544192&quot;}"></div></p>
<p>Twitter uses an AI-powered, centrally managed algorithm to moderate what content the user is exposed to. </p>
<p>On Bluesky, however, users have control over the algorithm that selects what they are exposed to. As Wired magazine <a href="https://www.wired.com/story/bluesky-twitter-social-media/">explained</a>:</p>
<blockquote>
<p>Crucially, users and servers will be able to label posts or specific users - e.g., with a tag like “racist” — and anyone can subscribe to that list of labels, blocking posts on that basis.</p>
</blockquote>
<p>Bluesky <a href="https://twitter.com/bluesky/status/1641845604807745536">calls</a> this concept a “composable, customizable marketplace of algorithms that lets you take control of how you spend your attention.”</p>
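<p>The labelling scheme described above can be sketched in code. The following is a conceptual illustration only, assuming a simplified model in which label lists map authors to tags and a timeline filters posts against the labels a user has chosen to block; none of these class or method names come from Bluesky’s actual API.</p>

```python
# A minimal sketch of "composable moderation": users subscribe to published
# label lists and posts are filtered on that basis. Illustrative only; the
# names (Post, LabelList, Timeline) are invented, not Bluesky's API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class LabelList:
    """A published list mapping authors to labels, e.g. 'spam'."""
    name: str
    labeled_authors: dict = field(default_factory=dict)  # author -> label

class Timeline:
    def __init__(self):
        self.subscriptions: list[LabelList] = []
        self.blocked_labels: set[str] = set()

    def subscribe(self, label_list: LabelList, block: set[str]):
        """Subscribe to a label list and choose which labels to block."""
        self.subscriptions.append(label_list)
        self.blocked_labels |= block

    def filter(self, posts: list[Post]) -> list[Post]:
        """Drop any post whose author carries a blocked label."""
        def allowed(post: Post) -> bool:
            return not any(
                ll.labeled_authors.get(post.author) in self.blocked_labels
                for ll in self.subscriptions
            )
        return [p for p in posts if allowed(p)]

# Usage: one user blocks posts labelled "spam"; a user with no
# subscriptions would see everything.
labels = LabelList("community-labels", {"bot123": "spam"})
timeline = Timeline()
timeline.subscribe(labels, block={"spam"})
posts = [Post("alice", "hello sky"), Post("bot123", "buy now!")]
print([p.author for p in timeline.filter(posts)])  # ['alice']
```

The key design point is that the filtering logic and the label data are both chosen by the user, rather than applied centrally by the platform.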
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1646676663965745152&quot;}"></div></p>
<p>In addition to giving users more control over what kind of content they see, Bluesky has plans to “decentralise” control of social media even further. If all goes well, Bluesky itself will just be the first of many interconnected social networks running on the same basic principles.</p>
<p>Bluesky is based on what it calls the <a href="https://twitter.com/bluesky/status/1582437531278540800">AT protocol</a>, a network that allows servers to communicate with each other. This means that, hypothetically, you could <a href="https://blueskyweb.xyz/blog/5-19-2023-user-faq">move your account</a> between different social networks that also use the AT protocol without losing your content and followers. </p>
<p>It’s worth noting this is all a bit theoretical for now; this functionality can’t be used yet.</p>
<p>But it is designed to eventually address the <a href="https://blueskyweb.xyz/blog/5-19-2023-user-faq">concerns</a> of social media influencers who fear losing their audience due to platform rule changes or when choosing to move to a different platform. </p>
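<p>The portability idea can be illustrated with a toy model: an account’s stable identifier (in the AT protocol, a DID) merely points at whichever server currently hosts the data, so followers who reference the identifier are unaffected by a move. This is a conceptual sketch under that assumption, not the real protocol.</p>

```python
# Toy illustration of server-portable identity: a directory maps a stable
# identifier to the server currently hosting the account. Migrating updates
# the pointer; anything that references the identifier keeps working.
# Conceptual only -- the real AT protocol is far more involved.
class Directory:
    def __init__(self):
        self.hosting = {}

    def register(self, did: str, server: str):
        self.hosting[did] = server

    def migrate(self, did: str, new_server: str):
        """Move the account's data to a new server; the DID is unchanged."""
        self.hosting[did] = new_server

    def resolve(self, did: str) -> str:
        return self.hosting[did]

directory = Directory()
directory.register("did:example:alice", "bluesky.social")
followers = ["did:example:bob"]  # followers reference the DID, not the server

directory.migrate("did:example:alice", "other-network.example")
# After migration, the same DID still resolves -- followers are kept.
print(directory.resolve("did:example:alice"))  # other-network.example
```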
<h2>Invite-only</h2>
<p>Another distinguishing factor of Bluesky is that, for now anyway, it is invitation-only.</p>
<p>Most social media platforms, including Twitter, allow users to register freely. Bluesky, however, requires an invitation code. Existing users receive invitation codes fortnightly. </p>
<p>Despite at least 360,000 Bluesky app <a href="https://abcnews.go.com/Business/bluesky-social-twitter-alternative/story?id=99039118">downloads</a>, it’s been <a href="https://www.theguardian.com/technology/2023/may/16/it-has-high-ambitions-but-can-jack-dorseys-spinoff-bluesky-really-take-over-from-twitter">reported</a> there are only 70,000 users. Media reported earlier this month there were a staggering <a href="https://www.businessinsider.com/dorsey-bluesky-invite-social-exclusive-waitlist-no-heads-of-state-2023-5">1.9 million people</a> on the waitlist.</p>
<p>With so many people curious to get in, Bluesky invites became a hot commodity. Codes have been listed on eBay for between A$50 and A$200, with some listings asking much more.</p>
<p>The invitation-only design ensures steady user growth, avoiding a rapid influx of users followed by a sudden loss of interest.</p>
<p>And potential new users who patiently wait for an invitation are already familiar with Bluesky. Flooding other social media platforms with requests for <a href="https://www.theguardian.com/technology/2023/may/16/it-has-high-ambitions-but-can-jack-dorseys-spinoff-bluesky-really-take-over-from-twitter">invitation codes creates extra interest</a>, too. </p>
<p>Every new Bluesky user knows at least one existing user, which ensures users have something in common to post about. </p>
<p>It would seem Bluesky’s creators aimed to selectively bring in like-minded individuals from the start, rather than attempting to retrospectively eliminate problematic users.</p>
<p>Thanks to a great deal of user control over the content they see, and a small and selective user base so far, many report they’ve <a href="https://www.vox.com/technology/2023/4/29/23702979/bluesky-twitter-elon-musk-jack-dorsey-chrissy-teigen-aoc-dril-decentralized">found</a> a friendly atmosphere and good <a href="https://www.abc.net.au/news/science/2023-05-11/what-is-bluesky-and-can-it-replace-elon-musks-twitter/102316800">vibes</a> on Bluesky. </p>
<p>Others <a href="https://www.nytimes.com/2023/05/05/podcasts/hard-fork-bluesky-ai-jobs.html">say</a> it feels almost like a group chat. Bluesky has particularly resonated with marginalised communities, especially <a href="https://www.nbcnews.com/tech/black-tech-twitter-trans-users-marginalized-groups-flock-bluesky-rcna82442">transgender people</a>, who may feel safer there expressing themselves than on other social media sites.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/527432/original/file-20230522-23-by2qo7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many Twitter users have flocked to Bluesky.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>But will any of this last?</h2>
<p>As we’ve all seen, social media sites come and go.</p>
<p>Social media site Mastodon experienced explosive user growth in November last year, reaching <a href="https://fortune.com/2022/11/28/mastodon-social-ceo-eugen-rochko-twitter-elon-musk/">2.6 million</a> users within weeks, only to decline to <a href="https://www.theguardian.com/technology/2023/apr/18/mastodon-users-twitter-elon-musk-social-media">1.2 million</a> within a couple of months. </p>
<p>Decentralised moderation <a href="https://www.webpurify.com/blog/moderating-mastodon-and-the-fediverse/">challenges</a> on Mastodon have resulted in what <a href="https://www.reddit.com/r/Mastodon/comments/103m54p/having_trouble_finding_lighter_funny_casual/">some</a> <a href="https://twitter.com/KateElliottSFF/status/1627172974259499009">users</a> have <a href="https://twitter.com/mutualaidalt/status/1593933691210076161">described</a> as a “stuffy” culture. This, coupled with the complicated interface and the hard-to-grasp <a href="https://www.cnet.com/tech/services-and-software/what-is-mastodon-the-alternative-social-network-now-blocked-by-twitter/">concept</a> of “belonging” to a server, may have affected its chance of lasting success.</p>
<p>Unlike Mastodon, Bluesky has a simple and straightforward interface. To remain relevant in the long term, Bluesky must strike a delicate balance between curbing hate speech and trolls while maintaining engaging content and discussions. All while being more captivating than your inner-circle group chats.</p><img src="https://counter.theconversation.com/content/205995/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nataliya Ilyushina receives funding from the ARC Centre of Excellence for Automated Decision-Making and Society.</span></em></p>Twitter uses an AI-powered centrally managed algorithm to moderate what you see. On Bluesky, you have control over the algorithm that selects what you see through so-called ‘composable moderation’.Nataliya Ilyushina, Research Fellow, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2038882023-05-22T12:27:06Z2023-05-22T12:27:06ZWhat is a black box? A computer scientist explains what it means when the inner workings of AIs are hidden<figure><img src="https://images.theconversation.com/files/527116/original/file-20230518-29-egvjik.jpg?ixlib=rb-1.1.0&rect=0%2C20%2C4500%2C2970&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You can't see inside any opaque box, but the color black adds an air of mystery.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/black-box-levitation-on-black-background-3d-royalty-free-image/610655646">chingraph/iStock via Getty Images</a></span></figcaption></figure><p>For some people, the term “black box” brings to mind the recording devices in airplanes that are valuable for postmortem analyses if the unthinkable happens. For others it evokes small, minimally outfitted theaters. But black box is also an important term in the world of artificial intelligence. </p>
<p>AI <a href="https://www.techtarget.com/whatis/definition/black-box-AI">black boxes</a> refer to AI systems with internal workings that are invisible to the user. You can feed them input and get output, but you cannot examine the system’s code or the logic that produced the output. </p>
<p>Machine learning is the dominant subset of artificial intelligence. It underlies generative AI systems like <a href="https://openai.com/blog/chatgpt">ChatGPT</a> and <a href="https://openai.com/product/dall-e-2">DALL-E 2</a>. There are three components to machine learning: an algorithm or a set of algorithms, training data and a model. An algorithm is a set of procedures. In machine learning, an algorithm learns to identify patterns after being trained on a large set of examples – the training data. Once a machine-learning algorithm has been trained, the result is a machine-learning model. The model is what people use. </p>
<p>For example, a machine-learning algorithm could be designed to identify patterns in images, and training data could be images of dogs. The resulting machine-learning model would be a dog spotter. You would feed it an image as input and get as output whether and where in the image a set of pixels represents a dog.</p>
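<p>The three components described above can be shown in miniature. The sketch below uses a deliberately tiny algorithm (nearest-centroid classification) and invented two-number “features” standing in for image data; a real dog spotter would learn from pixels with a far more sophisticated algorithm.</p>

```python
# The three components of machine learning, in miniature:
#   algorithm      -> the train() and predict() procedures below
#   training data  -> the labelled toy examples
#   model          -> the centroids that train() produces
# The two-number features are invented for illustration.

def train(examples):
    """Algorithm + training data -> model (one centroid per class)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(model, features):
    """The model is what people use: feed in features, get a label out."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

training_data = [
    ([0.9, 0.8], "dog"), ([0.8, 0.9], "dog"),        # e.g. fur, snout scores
    ([0.1, 0.2], "not dog"), ([0.2, 0.1], "not dog"),
]
model = train(training_data)          # training happens once...
print(predict(model, [0.85, 0.75]))   # ...then the model is used: dog
```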
<p>Any of the three components of a machine-learning system can be hidden, or in a black box. As is often the case, the algorithm is publicly known, which makes putting it in a black box less effective. So to protect their intellectual property, AI developers often put the model in a black box. Another approach software developers take is to obscure the data used to train the model – in other words, put the training data in a black box.</p>
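<p>To make the “model in a black box” idea concrete, here is one hypothetical way a developer might expose only an input–output interface while keeping the trained parameters hidden. This is an in-process sketch for illustration; in a real deployment the model would typically sit behind a network API, so callers could not inspect it at all.</p>

```python
# A hypothetical black-boxed model: callers get an input -> output
# interface, while the trained parameters stay hidden. (Python's
# double-underscore name mangling is a weak stand-in for the real
# barrier, which is usually a remote API.)

class BlackBoxSpotter:
    def __init__(self):
        # Trained parameters live here, out of the caller's reach.
        self.__weights = [0.6, 0.4]
        self.__threshold = 0.5

    def predict(self, features):
        """The only thing users see: input in, answer out."""
        score = sum(w * x for w, x in zip(self.__weights, features))
        return "dog" if score >= self.__threshold else "not dog"

spotter = BlackBoxSpotter()
print(spotter.predict([0.9, 0.8]))  # dog
# The caller gets outputs but not the logic that produced them:
# accessing spotter.__weights raises AttributeError.
```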
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Q6JbmGQstDM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Black box algorithms make it very difficult to understand how AIs work, but the situation isn’t quite black and white.</span></figcaption>
</figure>
<p>The opposite of a black box is sometimes referred to as a <a href="https://www.tutorialspoint.com/software_testing_dictionary/glass_box_testing.htm">glass box</a>. An AI glass box is a system whose algorithms, training data and model are all available for anyone to see. But researchers sometimes characterize aspects of even these as black box. </p>
<p>That’s because researchers <a href="https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works">don’t fully understand</a> how machine-learning algorithms, particularly <a href="https://www.pcmag.com/news/what-is-deep-learning">deep-learning</a> algorithms, operate. The field of <a href="https://theconversation.com/how-explainable-artificial-intelligence-can-help-humans-innovate-151737">explainable AI</a> is working to develop algorithms that, while not necessarily glass box, can be better understood by humans.</p>
<h2>Why AI black boxes matter</h2>
<p>In many cases, there is good reason to be wary of black box machine-learning algorithms and models. Suppose a machine-learning model has made a diagnosis about your health. Would you want the model to be black box or glass box? What about the physician prescribing your course of treatment? Perhaps she would like to know how the model arrived at its decision. </p>
<p>What if a machine-learning model that determines whether you qualify for a business loan from a bank turns you down? Wouldn’t you like to know why? If you did, you could more effectively appeal the decision, or change your situation to increase your chances of getting a loan the next time.</p>
<p>Black boxes also have important implications for software system security. For years, many people in the computing field thought that keeping software in a black box would prevent hackers from examining it and therefore it would be secure. This assumption has largely been proved wrong because hackers can <a href="https://www.merriam-webster.com/dictionary/reverse%20engineer">reverse-engineer</a> software – that is, build a facsimile by closely observing how a piece of software works – and discover vulnerabilities to exploit. </p>
<p>If software is in a glass box, then software testers and well-intentioned hackers can examine it and inform the creators of weaknesses, thereby minimizing cyberattacks.</p><img src="https://counter.theconversation.com/content/203888/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Saurabh Bagchi receives research funding from a large number of sources: the federal government, state government, and private enterprises. The full list can be seen in his CV at <a href="https://bagchi.github.io/vita.html">https://bagchi.github.io/vita.html</a>. Bagchi is an office bearer of the IEEE Computer Society. He is the co-founder and CTO of a cloud computing startup, KeyByte. </span></em></p>Metaphorical black boxes shield the inner workings of AIs, which protect software developers’ intellectual property. They also make it hard to understand how the AIs work – and why things go wrong.Saurabh Bagchi, Professor of Electrical and Computer Engineering, Purdue UniversityLicensed as Creative Commons – attribution, no derivatives.